Mar 13 10:32:41.142182 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 13 10:32:41.762375 master-0 kubenswrapper[4091]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 10:32:41.762375 master-0 kubenswrapper[4091]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 13 10:32:41.762375 master-0 kubenswrapper[4091]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 10:32:41.762375 master-0 kubenswrapper[4091]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 10:32:41.762375 master-0 kubenswrapper[4091]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 13 10:32:41.762375 master-0 kubenswrapper[4091]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 10:32:41.765761 master-0 kubenswrapper[4091]: I0313 10:32:41.765290 4091 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769754 4091 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769784 4091 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769789 4091 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769794 4091 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769798 4091 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769802 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769806 4091 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769810 4091 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769814 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769818 4091 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769822 4091 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769826 4091 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769829 4091 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769834 4091 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769837 4091 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769840 4091 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769844 4091 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769847 4091 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769851 4091 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 10:32:41.769823 master-0 kubenswrapper[4091]: W0313 10:32:41.769855 4091 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769858 4091 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769862 4091 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769865 4091 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769869 4091 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769872 4091 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769876 4091 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769880 4091 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769883 4091 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769887 4091 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769897 4091 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769903 4091 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769910 4091 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769917 4091 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769927 4091 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769932 4091 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769936 4091 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769943 4091 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 10:32:41.770614 master-0 kubenswrapper[4091]: W0313 10:32:41.769950 4091 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.769957 4091 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.769962 4091 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.769969 4091 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.769974 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.769978 4091 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.769983 4091 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.769989 4091 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.769993 4091 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.769998 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770003 4091 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770009 4091 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770014 4091 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770018 4091 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770023 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770029 4091 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770035 4091 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770039 4091 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770043 4091 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770047 4091 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 10:32:41.771041 master-0 kubenswrapper[4091]: W0313 10:32:41.770051 4091 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770055 4091 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770058 4091 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770062 4091 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770066 4091 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770069 4091 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770073 4091 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770076 4091 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770080 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770084 4091 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770087 4091 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770091 4091 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770095 4091 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770098 4091 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: W0313 10:32:41.770102 4091 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: I0313 10:32:41.770817 4091 flags.go:64] FLAG: --address="0.0.0.0"
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: I0313 10:32:41.770833 4091 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: I0313 10:32:41.770843 4091 flags.go:64] FLAG: --anonymous-auth="true"
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: I0313 10:32:41.770849 4091 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: I0313 10:32:41.770857 4091 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: I0313 10:32:41.770862 4091 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 13 10:32:41.771522 master-0 kubenswrapper[4091]: I0313 10:32:41.770877 4091 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770883 4091 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770890 4091 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770897 4091 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770908 4091 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770915 4091 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770922 4091 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770934 4091 flags.go:64] FLAG: --cgroup-root=""
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770942 4091 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770949 4091 flags.go:64] FLAG: --client-ca-file=""
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770954 4091 flags.go:64] FLAG: --cloud-config=""
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770958 4091 flags.go:64] FLAG: --cloud-provider=""
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770963 4091 flags.go:64] FLAG: --cluster-dns="[]"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770970 4091 flags.go:64] FLAG: --cluster-domain=""
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770975 4091 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770980 4091 flags.go:64] FLAG: --config-dir=""
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770986 4091 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770991 4091 flags.go:64] FLAG: --container-log-max-files="5"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.770999 4091 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.771004 4091 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.771010 4091 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.771016 4091 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.771022 4091 flags.go:64] FLAG: --contention-profiling="false"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.771026 4091 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.771031 4091 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 13 10:32:41.772182 master-0 kubenswrapper[4091]: I0313 10:32:41.771036 4091 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771040 4091 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771047 4091 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771051 4091 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771056 4091 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771060 4091 flags.go:64] FLAG: --enable-load-reader="false"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771064 4091 flags.go:64] FLAG: --enable-server="true"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771068 4091 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771076 4091 flags.go:64] FLAG: --event-burst="100"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771080 4091 flags.go:64] FLAG: --event-qps="50"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771085 4091 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771090 4091 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771094 4091 flags.go:64] FLAG: --eviction-hard=""
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771101 4091 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771107 4091 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771111 4091 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771116 4091 flags.go:64] FLAG: --eviction-soft=""
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771120 4091 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771125 4091 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771129 4091 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771133 4091 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771138 4091 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771142 4091 flags.go:64] FLAG: --fail-swap-on="true"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771146 4091 flags.go:64] FLAG: --feature-gates=""
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771152 4091 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 13 10:32:41.772820 master-0 kubenswrapper[4091]: I0313 10:32:41.771156 4091 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771161 4091 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771165 4091 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771170 4091 flags.go:64] FLAG: --healthz-port="10248"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771174 4091 flags.go:64] FLAG: --help="false"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771178 4091 flags.go:64] FLAG: --hostname-override=""
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771183 4091 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771188 4091 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771192 4091 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771197 4091 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771201 4091 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771206 4091 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771210 4091 flags.go:64] FLAG: --image-service-endpoint=""
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771214 4091 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771218 4091 flags.go:64] FLAG: --kube-api-burst="100"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771223 4091 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771227 4091 flags.go:64] FLAG: --kube-api-qps="50"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771232 4091 flags.go:64] FLAG: --kube-reserved=""
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771236 4091 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771241 4091 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771245 4091 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771250 4091 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771256 4091 flags.go:64] FLAG: --lock-file=""
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771260 4091 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771265 4091 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 13 10:32:41.773490 master-0 kubenswrapper[4091]: I0313 10:32:41.771269 4091 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771277 4091 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771281 4091 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771286 4091 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771290 4091 flags.go:64] FLAG: --logging-format="text"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771294 4091 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771300 4091 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771304 4091 flags.go:64] FLAG: --manifest-url=""
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771308 4091 flags.go:64] FLAG: --manifest-url-header=""
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771314 4091 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771319 4091 flags.go:64] FLAG: --max-open-files="1000000"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771325 4091 flags.go:64] FLAG: --max-pods="110"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771330 4091 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771334 4091 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771338 4091 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771343 4091 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771348 4091 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771352 4091 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771357 4091 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771369 4091 flags.go:64] FLAG: --node-status-max-images="50"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771374 4091 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771378 4091 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771383 4091 flags.go:64] FLAG: --pod-cidr=""
Mar 13 10:32:41.774140 master-0 kubenswrapper[4091]: I0313 10:32:41.771387 4091 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771394 4091 flags.go:64] FLAG: --pod-manifest-path=""
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771399 4091 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771404 4091 flags.go:64] FLAG: --pods-per-core="0"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771408 4091 flags.go:64] FLAG: --port="10250"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771413 4091 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771418 4091 flags.go:64] FLAG: --provider-id=""
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771422 4091 flags.go:64] FLAG: --qos-reserved=""
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771427 4091 flags.go:64] FLAG: --read-only-port="10255"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771431 4091 flags.go:64] FLAG: --register-node="true"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771436 4091 flags.go:64] FLAG: --register-schedulable="true"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771441 4091 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771450 4091 flags.go:64] FLAG: --registry-burst="10"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771454 4091 flags.go:64] FLAG: --registry-qps="5"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771458 4091 flags.go:64] FLAG: --reserved-cpus=""
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771463 4091 flags.go:64] FLAG: --reserved-memory=""
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771469 4091 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771473 4091 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771478 4091 flags.go:64] FLAG: --rotate-certificates="false"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771483 4091 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771487 4091 flags.go:64] FLAG: --runonce="false"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771492 4091 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771496 4091 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771501 4091 flags.go:64] FLAG: --seccomp-default="false"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771505 4091 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771509 4091 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 13 10:32:41.774701 master-0 kubenswrapper[4091]: I0313 10:32:41.771514 4091 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771518 4091 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771523 4091 flags.go:64] FLAG: --storage-driver-password="root"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771528 4091 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771532 4091 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771538 4091 flags.go:64] FLAG: --storage-driver-user="root"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771542 4091 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771546 4091 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771550 4091 flags.go:64] FLAG: --system-cgroups=""
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771555 4091 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771561 4091 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771566 4091 flags.go:64] FLAG: --tls-cert-file=""
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771572 4091 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771578 4091 flags.go:64] FLAG: --tls-min-version=""
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771582 4091 flags.go:64] FLAG: --tls-private-key-file=""
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771601 4091 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771605 4091 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771609 4091 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771614 4091 flags.go:64] FLAG: --v="2"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771630 4091 flags.go:64] FLAG: --version="false"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771637 4091 flags.go:64] FLAG: --vmodule=""
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771643 4091 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: I0313 10:32:41.771648 4091 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: W0313 10:32:41.771799 4091 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 10:32:41.775296 master-0 kubenswrapper[4091]: W0313 10:32:41.771811 4091 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771818 4091 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771823 4091 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771827 4091 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771832 4091 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771836 4091 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771841 4091 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771845 4091 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771849 4091 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771853 4091 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771856 4091 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771860 4091 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771864 4091 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771868 4091 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771872 4091 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771875 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771879 4091 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771883 4091 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771887 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771891 4091 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 10:32:41.775857 master-0 kubenswrapper[4091]: W0313 10:32:41.771899 4091 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771903 4091 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771906 4091 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771910 4091 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771913 4091 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771917 4091 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771921 4091 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771925 4091 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771928 4091 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771932 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771935 4091 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771941 4091 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771946 4091 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771950 4091 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771953 4091 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771957 4091 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771961 4091 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771964 4091 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771968 4091 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771971 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 10:32:41.776306 master-0 kubenswrapper[4091]: W0313 10:32:41.771975 4091 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.771978 4091 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.771983 4091 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.771988 4091 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.771992 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.771996 4091 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772000 4091 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772005 4091 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772009 4091 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772013 4091 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772017 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772021 4091 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772027 4091 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772031 4091 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772035 4091 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772038 4091 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772042 4091 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772046 4091 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772050 4091 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772053 4091 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 10:32:41.776775 master-0 kubenswrapper[4091]: W0313 10:32:41.772057 4091 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772060 4091 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772065 4091 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772072 4091 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772076 4091 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772080 4091 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772085 4091 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772089 4091 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772093 4091 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772097 4091 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: W0313 10:32:41.772101 4091 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 10:32:41.777234 master-0 kubenswrapper[4091]: I0313 10:32:41.772115 4091 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 10:32:41.785355 master-0 kubenswrapper[4091]: I0313 10:32:41.785299 4091 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 13 10:32:41.785355 master-0 kubenswrapper[4091]: I0313 10:32:41.785348 4091 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 10:32:41.785522 master-0 kubenswrapper[4091]: W0313 10:32:41.785499 4091 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 10:32:41.785522 master-0 kubenswrapper[4091]: W0313 10:32:41.785517 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 10:32:41.785522 master-0 kubenswrapper[4091]: W0313 10:32:41.785524 4091 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785531 4091 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785537 4091 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785543 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785549 4091 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785554 4091 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785560 4091 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785565 4091 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785569 4091 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785575 4091 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785607 4091 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 10:32:41.785618 master-0 kubenswrapper[4091]: W0313 10:32:41.785620 4091 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785631 4091 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785642 4091 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785651 4091 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785658 4091 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785665 4091 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785670 4091 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785676 4091 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785681 4091 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785686 4091 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785691 4091 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785696 4091 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785705 4091 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785711 4091 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785716 4091 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785721 4091 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785727 4091 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785732 4091 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785737 4091 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 10:32:41.786038 master-0 kubenswrapper[4091]: W0313 10:32:41.785741 4091 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785746 4091 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785751 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785756 4091 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785761 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785767 4091 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785771 4091 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785777 4091 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785782 4091 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785786 4091 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785791 4091 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785796 4091 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785801 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785806 4091 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785811 4091 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785816 4091 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785821 4091 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785826 4091 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785831 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785836 4091 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 10:32:41.786465 master-0 kubenswrapper[4091]: W0313 10:32:41.785843 4091 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785850 4091 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785856 4091 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785861 4091 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785867 4091 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785873 4091 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785878 4091 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785884 4091 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785891 4091 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785901 4091 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785914 4091 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785922 4091 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785930 4091 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785937 4091 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785945 4091 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785972 4091 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785980 4091 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785987 4091 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.785994 4091 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 10:32:41.787145 master-0 kubenswrapper[4091]: W0313 10:32:41.786001 4091 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: I0313 10:32:41.786010 4091 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786182 4091 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786192 4091 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786200 4091 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786206 4091 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786211 4091 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786216 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786221 4091 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786227 4091 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786233 4091 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786240 4091 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786247 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786253 4091 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786261 4091 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 10:32:41.787613 master-0 kubenswrapper[4091]: W0313 10:32:41.786268 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786273 4091 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786278 4091 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786283 4091 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786289 4091 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786294 4091 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786299 4091 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786304 4091 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786309 4091 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786313 4091 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786319 4091 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786325 4091 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786330 4091 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786335 4091 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786341 4091 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786348 4091 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786353 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786358 4091 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786363 4091 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786369 4091 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 10:32:41.788029 master-0 kubenswrapper[4091]: W0313 10:32:41.786374 4091 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786379 4091 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786384 4091 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786389 4091 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786394 4091 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786399 4091 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786406 4091 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786411 4091 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786416 4091 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786421 4091 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786426 4091 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786432 4091 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786437 4091 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786443 4091 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786447 4091 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786452 4091 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786458 4091 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786464 4091 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786469 4091 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786475 4091 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 10:32:41.788476 master-0 kubenswrapper[4091]: W0313 10:32:41.786480 4091 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786486 4091 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786493 4091 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786500 4091 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786505 4091 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786512 4091 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786517 4091 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786523 4091 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786528 4091 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786533 4091 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786540 4091 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786546 4091 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786552 4091 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786557 4091 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786562 4091 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786567 4091 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786572 4091 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786577 4091 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 10:32:41.788935 master-0 kubenswrapper[4091]: W0313 10:32:41.786604 4091 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 10:32:41.789344 master-0 kubenswrapper[4091]: I0313 10:32:41.786614 4091 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 10:32:41.789344 master-0 kubenswrapper[4091]: I0313 10:32:41.786853 4091 server.go:940] "Client rotation is on, will bootstrap in background" Mar 13 10:32:41.790056 master-0 
kubenswrapper[4091]: I0313 10:32:41.790020 4091 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 13 10:32:41.791276 master-0 kubenswrapper[4091]: I0313 10:32:41.791243 4091 server.go:997] "Starting client certificate rotation" Mar 13 10:32:41.791324 master-0 kubenswrapper[4091]: I0313 10:32:41.791286 4091 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 13 10:32:41.791542 master-0 kubenswrapper[4091]: I0313 10:32:41.791463 4091 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 10:32:41.819181 master-0 kubenswrapper[4091]: I0313 10:32:41.819095 4091 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 10:32:41.821560 master-0 kubenswrapper[4091]: I0313 10:32:41.821496 4091 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 10:32:41.824485 master-0 kubenswrapper[4091]: E0313 10:32:41.824378 4091 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:32:41.844170 master-0 kubenswrapper[4091]: I0313 10:32:41.844105 4091 log.go:25] "Validated CRI v1 runtime API" Mar 13 10:32:41.852808 master-0 kubenswrapper[4091]: I0313 10:32:41.852765 4091 log.go:25] "Validated CRI v1 image API" Mar 13 10:32:41.855386 master-0 kubenswrapper[4091]: I0313 10:32:41.855338 4091 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 10:32:41.861468 master-0 
kubenswrapper[4091]: I0313 10:32:41.861399 4091 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 b89da96d-e8b7-46f7-a5b4-754b0b40734d:/dev/vda3] Mar 13 10:32:41.861468 master-0 kubenswrapper[4091]: I0313 10:32:41.861430 4091 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Mar 13 10:32:41.883578 master-0 kubenswrapper[4091]: I0313 10:32:41.883145 4091 manager.go:217] Machine: {Timestamp:2026-03-13 10:32:41.880534456 +0000 UTC m=+0.569256948 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:0b3c13f41020471d8d074d77a948365d SystemUUID:0b3c13f4-1020-471d-8d07-4d77a948365d BootID:8a9973c8-4daa-47e3-857d-01825c17d4bc Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 
Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:e1:20:b5 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:f6:43:7d Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:fe:76:e9:ad:1f:61 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: 
DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 10:32:41.883578 master-0 kubenswrapper[4091]: I0313 10:32:41.883486 4091 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 13 10:32:41.883896 master-0 kubenswrapper[4091]: I0313 10:32:41.883835 4091 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 10:32:41.885534 master-0 kubenswrapper[4091]: I0313 10:32:41.885489 4091 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 10:32:41.885841 master-0 kubenswrapper[4091]: I0313 10:32:41.885780 4091 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 10:32:41.886126 master-0 kubenswrapper[4091]: I0313 10:32:41.885831 4091 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percenta
ge":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 10:32:41.886200 master-0 kubenswrapper[4091]: I0313 10:32:41.886147 4091 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 10:32:41.886200 master-0 kubenswrapper[4091]: I0313 10:32:41.886161 4091 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 10:32:41.886200 master-0 kubenswrapper[4091]: I0313 10:32:41.886180 4091 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 10:32:41.886370 master-0 kubenswrapper[4091]: I0313 10:32:41.886212 4091 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 10:32:41.886444 master-0 kubenswrapper[4091]: I0313 10:32:41.886375 4091 state_mem.go:36] "Initialized new in-memory state store" Mar 13 10:32:41.886504 master-0 kubenswrapper[4091]: I0313 10:32:41.886483 4091 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 10:32:41.890876 master-0 kubenswrapper[4091]: I0313 10:32:41.890840 4091 kubelet.go:418] "Attempting to sync node with API server" Mar 13 10:32:41.890876 master-0 kubenswrapper[4091]: I0313 10:32:41.890876 4091 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 10:32:41.890991 master-0 kubenswrapper[4091]: I0313 10:32:41.890904 4091 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 10:32:41.890991 master-0 kubenswrapper[4091]: I0313 10:32:41.890924 4091 kubelet.go:324] "Adding apiserver pod source" Mar 13 10:32:41.890991 master-0 
kubenswrapper[4091]: I0313 10:32:41.890942 4091 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 10:32:41.896669 master-0 kubenswrapper[4091]: I0313 10:32:41.896556 4091 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 10:32:41.897755 master-0 kubenswrapper[4091]: W0313 10:32:41.897655 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:41.897755 master-0 kubenswrapper[4091]: W0313 10:32:41.897674 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:41.897879 master-0 kubenswrapper[4091]: E0313 10:32:41.897776 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:32:41.897879 master-0 kubenswrapper[4091]: E0313 10:32:41.897797 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:32:41.898089 master-0 kubenswrapper[4091]: I0313 10:32:41.898054 4091 
kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 13 10:32:41.898380 master-0 kubenswrapper[4091]: I0313 10:32:41.898347 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 13 10:32:41.898380 master-0 kubenswrapper[4091]: I0313 10:32:41.898376 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 13 10:32:41.898477 master-0 kubenswrapper[4091]: I0313 10:32:41.898387 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 13 10:32:41.898477 master-0 kubenswrapper[4091]: I0313 10:32:41.898405 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 13 10:32:41.898477 master-0 kubenswrapper[4091]: I0313 10:32:41.898414 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 13 10:32:41.898477 master-0 kubenswrapper[4091]: I0313 10:32:41.898423 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 13 10:32:41.898477 master-0 kubenswrapper[4091]: I0313 10:32:41.898433 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 13 10:32:41.898477 master-0 kubenswrapper[4091]: I0313 10:32:41.898442 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 13 10:32:41.898477 master-0 kubenswrapper[4091]: I0313 10:32:41.898472 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 13 10:32:41.898477 master-0 kubenswrapper[4091]: I0313 10:32:41.898483 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 13 10:32:41.898753 master-0 kubenswrapper[4091]: I0313 10:32:41.898509 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 13 10:32:41.898753 master-0 kubenswrapper[4091]: I0313 10:32:41.898637 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 13 10:32:41.900525 master-0 
kubenswrapper[4091]: I0313 10:32:41.900490 4091 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 13 10:32:41.901132 master-0 kubenswrapper[4091]: I0313 10:32:41.901098 4091 server.go:1280] "Started kubelet" Mar 13 10:32:41.902411 master-0 kubenswrapper[4091]: I0313 10:32:41.902281 4091 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 10:32:41.902461 master-0 kubenswrapper[4091]: I0313 10:32:41.902436 4091 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 13 10:32:41.902995 master-0 kubenswrapper[4091]: I0313 10:32:41.902967 4091 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 10:32:41.903317 master-0 kubenswrapper[4091]: I0313 10:32:41.902199 4091 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 10:32:41.903348 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 13 10:32:41.909617 master-0 kubenswrapper[4091]: I0313 10:32:41.905644 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:41.913928 master-0 kubenswrapper[4091]: E0313 10:32:41.912666 4091 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c600cec83bda5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.901063589 +0000 UTC 
m=+0.589786061,LastTimestamp:2026-03-13 10:32:41.901063589 +0000 UTC m=+0.589786061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:41.914272 master-0 kubenswrapper[4091]: I0313 10:32:41.914231 4091 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 13 10:32:41.914319 master-0 kubenswrapper[4091]: I0313 10:32:41.914280 4091 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 10:32:41.920056 master-0 kubenswrapper[4091]: I0313 10:32:41.920018 4091 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 13 10:32:41.920056 master-0 kubenswrapper[4091]: I0313 10:32:41.920046 4091 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 13 10:32:41.922435 master-0 kubenswrapper[4091]: I0313 10:32:41.922397 4091 server.go:449] "Adding debug handlers to kubelet server" Mar 13 10:32:41.922687 master-0 kubenswrapper[4091]: E0313 10:32:41.922629 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 10:32:41.923240 master-0 kubenswrapper[4091]: I0313 10:32:41.923196 4091 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 13 10:32:41.923364 master-0 kubenswrapper[4091]: I0313 10:32:41.923330 4091 reconstruct.go:97] "Volume reconstruction finished" Mar 13 10:32:41.923364 master-0 kubenswrapper[4091]: I0313 10:32:41.923350 4091 reconciler.go:26] "Reconciler: start to sync state" Mar 13 10:32:41.923610 master-0 kubenswrapper[4091]: W0313 10:32:41.923495 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:41.923695 master-0 kubenswrapper[4091]: E0313 10:32:41.923653 4091 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:32:41.924963 master-0 kubenswrapper[4091]: I0313 10:32:41.924889 4091 factory.go:55] Registering systemd factory Mar 13 10:32:41.924963 master-0 kubenswrapper[4091]: I0313 10:32:41.924953 4091 factory.go:221] Registration of the systemd container factory successfully Mar 13 10:32:41.925092 master-0 kubenswrapper[4091]: E0313 10:32:41.924923 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 13 10:32:41.925410 master-0 kubenswrapper[4091]: I0313 10:32:41.925383 4091 factory.go:153] Registering CRI-O factory Mar 13 10:32:41.925657 master-0 kubenswrapper[4091]: I0313 10:32:41.925633 4091 factory.go:221] Registration of the crio container factory successfully Mar 13 10:32:41.925809 master-0 kubenswrapper[4091]: I0313 10:32:41.925784 4091 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 13 10:32:41.925848 master-0 kubenswrapper[4091]: I0313 10:32:41.925822 4091 factory.go:103] Registering Raw factory Mar 13 10:32:41.925848 master-0 kubenswrapper[4091]: I0313 10:32:41.925842 4091 manager.go:1196] Started watching for new ooms in manager Mar 13 10:32:41.925907 master-0 kubenswrapper[4091]: E0313 10:32:41.925847 4091 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 13 10:32:41.926513 master-0 kubenswrapper[4091]: I0313 10:32:41.926486 4091 manager.go:319] Starting recovery of all containers Mar 13 10:32:41.943053 master-0 kubenswrapper[4091]: I0313 10:32:41.943000 4091 manager.go:324] Recovery completed Mar 13 10:32:41.956148 master-0 kubenswrapper[4091]: I0313 10:32:41.956088 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:41.974675 master-0 kubenswrapper[4091]: I0313 10:32:41.974548 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:41.974675 master-0 kubenswrapper[4091]: I0313 10:32:41.974675 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:41.974935 master-0 kubenswrapper[4091]: I0313 10:32:41.974693 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:41.976052 master-0 kubenswrapper[4091]: I0313 10:32:41.976007 4091 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 10:32:41.976052 master-0 kubenswrapper[4091]: I0313 10:32:41.976037 4091 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 10:32:41.976126 master-0 kubenswrapper[4091]: I0313 10:32:41.976086 4091 state_mem.go:36] "Initialized new in-memory state store" Mar 13 10:32:42.023173 master-0 kubenswrapper[4091]: E0313 10:32:42.023019 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 10:32:42.053389 master-0 kubenswrapper[4091]: I0313 10:32:42.053236 4091 policy_none.go:49] "None policy: Start" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: I0313 10:32:42.055089 4091 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 10:32:42.267672 
master-0 kubenswrapper[4091]: I0313 10:32:42.055197 4091 state_mem.go:35] "Initializing new in-memory state store" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: E0313 10:32:42.123417 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: E0313 10:32:42.127171 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: I0313 10:32:42.194253 4091 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: I0313 10:32:42.195984 4091 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: I0313 10:32:42.201473 4091 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: I0313 10:32:42.201550 4091 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: E0313 10:32:42.201870 4091 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: W0313 10:32:42.203002 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: E0313 10:32:42.203083 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:32:42.267672 master-0 kubenswrapper[4091]: E0313 10:32:42.224341 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 10:32:42.302922 master-0 kubenswrapper[4091]: E0313 10:32:42.302382 4091 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 10:32:42.316121 master-0 kubenswrapper[4091]: I0313 10:32:42.316023 4091 manager.go:334] "Starting Device Plugin manager" Mar 13 10:32:42.316121 master-0 kubenswrapper[4091]: I0313 10:32:42.316124 4091 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 10:32:42.316121 master-0 kubenswrapper[4091]: I0313 10:32:42.316146 4091 server.go:79] "Starting device plugin registration server" Mar 13 10:32:42.316961 master-0 kubenswrapper[4091]: I0313 10:32:42.316915 4091 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 10:32:42.317093 master-0 kubenswrapper[4091]: I0313 10:32:42.317025 4091 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 10:32:42.317379 master-0 kubenswrapper[4091]: I0313 10:32:42.317332 4091 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 10:32:42.317484 master-0 kubenswrapper[4091]: I0313 10:32:42.317460 4091 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 10:32:42.317484 master-0 kubenswrapper[4091]: I0313 10:32:42.317480 4091 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 10:32:42.319902 master-0 kubenswrapper[4091]: E0313 10:32:42.319854 4091 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 10:32:42.418208 master-0 kubenswrapper[4091]: I0313 10:32:42.418132 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.419637 master-0 kubenswrapper[4091]: I0313 10:32:42.419602 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.419755 master-0 kubenswrapper[4091]: I0313 10:32:42.419653 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.419755 master-0 kubenswrapper[4091]: I0313 10:32:42.419670 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.419755 master-0 kubenswrapper[4091]: I0313 10:32:42.419709 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 10:32:42.421036 master-0 kubenswrapper[4091]: E0313 10:32:42.420961 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 10:32:42.504031 master-0 kubenswrapper[4091]: I0313 10:32:42.503861 4091 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 10:32:42.504531 master-0 kubenswrapper[4091]: I0313 10:32:42.504124 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.505711 master-0 kubenswrapper[4091]: I0313 10:32:42.505669 4091 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.505711 master-0 kubenswrapper[4091]: I0313 10:32:42.505703 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.505832 master-0 kubenswrapper[4091]: I0313 10:32:42.505729 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.505832 master-0 kubenswrapper[4091]: I0313 10:32:42.505831 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.506370 master-0 kubenswrapper[4091]: I0313 10:32:42.506316 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:32:42.506448 master-0 kubenswrapper[4091]: I0313 10:32:42.506421 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.506891 master-0 kubenswrapper[4091]: I0313 10:32:42.506848 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.506971 master-0 kubenswrapper[4091]: I0313 10:32:42.506928 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.506971 master-0 kubenswrapper[4091]: I0313 10:32:42.506958 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.507276 master-0 kubenswrapper[4091]: I0313 10:32:42.507249 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.507486 master-0 kubenswrapper[4091]: I0313 10:32:42.507436 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:32:42.507532 master-0 kubenswrapper[4091]: I0313 10:32:42.507517 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.508111 master-0 kubenswrapper[4091]: I0313 10:32:42.508071 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.508204 master-0 kubenswrapper[4091]: I0313 10:32:42.508130 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.508204 master-0 kubenswrapper[4091]: I0313 10:32:42.508154 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.508700 master-0 kubenswrapper[4091]: I0313 10:32:42.508674 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.508763 master-0 kubenswrapper[4091]: I0313 10:32:42.508717 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.508763 master-0 kubenswrapper[4091]: I0313 10:32:42.508735 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.508892 master-0 kubenswrapper[4091]: I0313 10:32:42.508856 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.508958 master-0 kubenswrapper[4091]: I0313 10:32:42.508929 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.508958 master-0 kubenswrapper[4091]: I0313 10:32:42.508948 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.509055 master-0 
kubenswrapper[4091]: I0313 10:32:42.509004 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.509204 master-0 kubenswrapper[4091]: I0313 10:32:42.509172 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.509262 master-0 kubenswrapper[4091]: I0313 10:32:42.509228 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.509993 master-0 kubenswrapper[4091]: I0313 10:32:42.509963 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.510051 master-0 kubenswrapper[4091]: I0313 10:32:42.510011 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.510051 master-0 kubenswrapper[4091]: I0313 10:32:42.510030 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.510175 master-0 kubenswrapper[4091]: I0313 10:32:42.510143 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.510216 master-0 kubenswrapper[4091]: I0313 10:32:42.510176 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.510216 master-0 kubenswrapper[4091]: I0313 10:32:42.510190 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.510271 master-0 kubenswrapper[4091]: I0313 10:32:42.510250 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.510299 master-0 kubenswrapper[4091]: I0313 10:32:42.510286 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.510431 master-0 kubenswrapper[4091]: I0313 10:32:42.510175 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.511251 master-0 kubenswrapper[4091]: I0313 10:32:42.511211 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.511311 master-0 kubenswrapper[4091]: I0313 10:32:42.511257 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.511311 master-0 kubenswrapper[4091]: I0313 10:32:42.511276 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.511707 master-0 kubenswrapper[4091]: I0313 10:32:42.511671 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.511707 master-0 kubenswrapper[4091]: I0313 10:32:42.511705 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.511778 master-0 kubenswrapper[4091]: I0313 10:32:42.511723 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.511967 master-0 kubenswrapper[4091]: I0313 10:32:42.511877 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:32:42.511967 master-0 kubenswrapper[4091]: I0313 10:32:42.511924 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.513069 master-0 kubenswrapper[4091]: I0313 10:32:42.513022 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.513127 master-0 kubenswrapper[4091]: I0313 10:32:42.513079 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.513127 master-0 kubenswrapper[4091]: I0313 10:32:42.513099 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.525865 master-0 kubenswrapper[4091]: I0313 10:32:42.525810 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:32:42.525865 master-0 kubenswrapper[4091]: I0313 10:32:42.525866 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:32:42.526053 master-0 kubenswrapper[4091]: I0313 10:32:42.525907 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: 
\"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:32:42.526053 master-0 kubenswrapper[4091]: I0313 10:32:42.525945 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:32:42.526053 master-0 kubenswrapper[4091]: I0313 10:32:42.526025 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.526245 master-0 kubenswrapper[4091]: I0313 10:32:42.526114 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.526245 master-0 kubenswrapper[4091]: I0313 10:32:42.526207 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:32:42.526359 master-0 kubenswrapper[4091]: I0313 10:32:42.526260 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.526359 master-0 kubenswrapper[4091]: I0313 10:32:42.526290 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.526359 master-0 kubenswrapper[4091]: I0313 10:32:42.526309 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.526359 master-0 kubenswrapper[4091]: I0313 10:32:42.526328 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.526359 master-0 kubenswrapper[4091]: I0313 10:32:42.526344 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.526359 master-0 
kubenswrapper[4091]: I0313 10:32:42.526360 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:32:42.526766 master-0 kubenswrapper[4091]: I0313 10:32:42.526410 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.526766 master-0 kubenswrapper[4091]: I0313 10:32:42.526461 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.526766 master-0 kubenswrapper[4091]: I0313 10:32:42.526499 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.526766 master-0 kubenswrapper[4091]: I0313 10:32:42.526535 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.529244 master-0 kubenswrapper[4091]: E0313 10:32:42.529164 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 13 10:32:42.621454 master-0 kubenswrapper[4091]: I0313 10:32:42.621387 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:42.622530 master-0 kubenswrapper[4091]: I0313 10:32:42.622509 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:42.622632 master-0 kubenswrapper[4091]: I0313 10:32:42.622546 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:42.622632 master-0 kubenswrapper[4091]: I0313 10:32:42.622557 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:42.622632 master-0 kubenswrapper[4091]: I0313 10:32:42.622622 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 10:32:42.623482 master-0 kubenswrapper[4091]: E0313 10:32:42.623442 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 10:32:42.627712 master-0 kubenswrapper[4091]: I0313 10:32:42.627671 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " 
pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:32:42.627802 master-0 kubenswrapper[4091]: I0313 10:32:42.627721 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:32:42.627802 master-0 kubenswrapper[4091]: I0313 10:32:42.627748 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:32:42.627802 master-0 kubenswrapper[4091]: I0313 10:32:42.627797 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:32:42.627941 master-0 kubenswrapper[4091]: I0313 10:32:42.627817 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.627941 master-0 kubenswrapper[4091]: I0313 10:32:42.627865 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:32:42.627941 master-0 kubenswrapper[4091]: I0313 10:32:42.627827 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:32:42.627941 master-0 kubenswrapper[4091]: I0313 10:32:42.627922 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.628123 master-0 kubenswrapper[4091]: I0313 10:32:42.627962 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.628123 master-0 kubenswrapper[4091]: I0313 10:32:42.627893 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:32:42.628205 master-0 kubenswrapper[4091]: I0313 10:32:42.628136 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.628205 master-0 kubenswrapper[4091]: I0313 10:32:42.628187 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:32:42.628291 master-0 kubenswrapper[4091]: I0313 10:32:42.628214 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:32:42.628291 master-0 kubenswrapper[4091]: I0313 10:32:42.628241 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.628291 master-0 kubenswrapper[4091]: I0313 10:32:42.628266 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.628426 master-0 kubenswrapper[4091]: I0313 10:32:42.628296 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: 
\"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.628426 master-0 kubenswrapper[4091]: I0313 10:32:42.628336 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.628426 master-0 kubenswrapper[4091]: I0313 10:32:42.628351 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.628426 master-0 kubenswrapper[4091]: I0313 10:32:42.628373 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.628426 master-0 kubenswrapper[4091]: I0313 10:32:42.628377 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:32:42.628426 master-0 kubenswrapper[4091]: I0313 10:32:42.628406 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.628426 master-0 kubenswrapper[4091]: I0313 10:32:42.628386 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.628728 master-0 kubenswrapper[4091]: I0313 10:32:42.628357 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.628728 master-0 kubenswrapper[4091]: I0313 10:32:42.628453 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.628728 master-0 kubenswrapper[4091]: I0313 10:32:42.628532 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:32:42.628728 master-0 kubenswrapper[4091]: I0313 10:32:42.628618 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod 
\"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.628728 master-0 kubenswrapper[4091]: I0313 10:32:42.628636 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:32:42.628728 master-0 kubenswrapper[4091]: I0313 10:32:42.628662 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.628728 master-0 kubenswrapper[4091]: I0313 10:32:42.628686 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.628728 master-0 kubenswrapper[4091]: I0313 10:32:42.628704 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.629023 master-0 kubenswrapper[4091]: I0313 10:32:42.628741 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.629023 master-0 kubenswrapper[4091]: I0313 10:32:42.628744 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.629023 master-0 kubenswrapper[4091]: I0313 10:32:42.628806 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.629023 master-0 kubenswrapper[4091]: I0313 10:32:42.628812 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.773894 master-0 kubenswrapper[4091]: W0313 10:32:42.773776 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:42.774938 master-0 kubenswrapper[4091]: E0313 10:32:42.773903 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:32:42.853971 master-0 kubenswrapper[4091]: I0313 10:32:42.853752 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:32:42.871541 master-0 kubenswrapper[4091]: I0313 10:32:42.871471 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:32:42.887816 master-0 kubenswrapper[4091]: I0313 10:32:42.887729 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:32:42.896482 master-0 kubenswrapper[4091]: I0313 10:32:42.896405 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:32:42.902503 master-0 kubenswrapper[4091]: I0313 10:32:42.902448 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 10:32:42.912737 master-0 kubenswrapper[4091]: I0313 10:32:42.912567 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:43.023920 master-0 kubenswrapper[4091]: I0313 10:32:43.023766 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:43.025157 master-0 kubenswrapper[4091]: I0313 10:32:43.025102 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:43.025157 master-0 kubenswrapper[4091]: I0313 10:32:43.025150 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:43.025157 master-0 kubenswrapper[4091]: I0313 10:32:43.025161 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:43.025445 master-0 kubenswrapper[4091]: I0313 10:32:43.025239 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 10:32:43.026390 master-0 kubenswrapper[4091]: E0313 10:32:43.026321 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 10:32:43.183893 master-0 kubenswrapper[4091]: W0313 10:32:43.183652 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:43.183893 master-0 kubenswrapper[4091]: E0313 10:32:43.183800 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:43.331284 master-0 kubenswrapper[4091]: E0313 10:32:43.331192 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 13 10:32:43.437026 master-0 kubenswrapper[4091]: W0313 10:32:43.436822 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:43.437026 master-0 kubenswrapper[4091]: E0313 10:32:43.436897 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:43.484844 master-0 kubenswrapper[4091]: W0313 10:32:43.484701 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:43.484844 master-0 kubenswrapper[4091]: E0313 10:32:43.484799 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:43.826774 master-0 kubenswrapper[4091]: I0313 10:32:43.826714 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:43.827890 master-0 kubenswrapper[4091]: I0313 10:32:43.827854 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:43.827890 master-0 kubenswrapper[4091]: I0313 10:32:43.827892 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:43.827890 master-0 kubenswrapper[4091]: I0313 10:32:43.827901 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:43.828028 master-0 kubenswrapper[4091]: I0313 10:32:43.827952 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 10:32:43.828973 master-0 kubenswrapper[4091]: E0313 10:32:43.828914 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 10:32:43.837158 master-0 kubenswrapper[4091]: I0313 10:32:43.837064 4091 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 10:32:43.839341 master-0 kubenswrapper[4091]: E0313 10:32:43.839278 4091 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:43.912574 master-0 kubenswrapper[4091]: I0313 10:32:43.912461 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:43.954038 master-0 kubenswrapper[4091]: W0313 10:32:43.953940 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f77c8e18b751d90bc0dfe2d4e304050.slice/crio-9681f2e75cd38c3ac67ed3e69a8ec48ca8451d34a1c4febdd60d09ed10b5be76 WatchSource:0}: Error finding container 9681f2e75cd38c3ac67ed3e69a8ec48ca8451d34a1c4febdd60d09ed10b5be76: Status 404 returned error can't find the container with id 9681f2e75cd38c3ac67ed3e69a8ec48ca8451d34a1c4febdd60d09ed10b5be76
Mar 13 10:32:43.968286 master-0 kubenswrapper[4091]: I0313 10:32:43.968229 4091 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 10:32:43.982836 master-0 kubenswrapper[4091]: W0313 10:32:43.982764 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-3a800ac5d1779f65790cfc04fd054cd45e77032d228f479c2dc831649fa5ed50 WatchSource:0}: Error finding container 3a800ac5d1779f65790cfc04fd054cd45e77032d228f479c2dc831649fa5ed50: Status 404 returned error can't find the container with id 3a800ac5d1779f65790cfc04fd054cd45e77032d228f479c2dc831649fa5ed50
Mar 13 10:32:44.019249 master-0 kubenswrapper[4091]: W0313 10:32:44.019171 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1 WatchSource:0}: Error finding container 2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1: Status 404 returned error can't find the container with id 2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1
Mar 13 10:32:44.071499 master-0 kubenswrapper[4091]: W0313 10:32:44.071344 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-ecef9696f6ed61b901e54b92a5f3382e4d7c9cf19d60275d449ceb9924469019 WatchSource:0}: Error finding container ecef9696f6ed61b901e54b92a5f3382e4d7c9cf19d60275d449ceb9924469019: Status 404 returned error can't find the container with id ecef9696f6ed61b901e54b92a5f3382e4d7c9cf19d60275d449ceb9924469019
Mar 13 10:32:44.123565 master-0 kubenswrapper[4091]: W0313 10:32:44.123488 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-bdc4eedb705036ff81733c276b076c49a4edd20b45c63ea797578c8d980a671b WatchSource:0}: Error finding container bdc4eedb705036ff81733c276b076c49a4edd20b45c63ea797578c8d980a671b: Status 404 returned error can't find the container with id bdc4eedb705036ff81733c276b076c49a4edd20b45c63ea797578c8d980a671b
Mar 13 10:32:44.207838 master-0 kubenswrapper[4091]: I0313 10:32:44.207664 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"bdc4eedb705036ff81733c276b076c49a4edd20b45c63ea797578c8d980a671b"}
Mar 13 10:32:44.208311 master-0 kubenswrapper[4091]: I0313 10:32:44.208261 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"ecef9696f6ed61b901e54b92a5f3382e4d7c9cf19d60275d449ceb9924469019"}
Mar 13 10:32:44.208876 master-0 kubenswrapper[4091]: I0313 10:32:44.208833 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1"}
Mar 13 10:32:44.209648 master-0 kubenswrapper[4091]: I0313 10:32:44.209610 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"3a800ac5d1779f65790cfc04fd054cd45e77032d228f479c2dc831649fa5ed50"}
Mar 13 10:32:44.210547 master-0 kubenswrapper[4091]: I0313 10:32:44.210506 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"9681f2e75cd38c3ac67ed3e69a8ec48ca8451d34a1c4febdd60d09ed10b5be76"}
Mar 13 10:32:44.911893 master-0 kubenswrapper[4091]: I0313 10:32:44.911832 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:44.933228 master-0 kubenswrapper[4091]: E0313 10:32:44.933142 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 13 10:32:45.220298 master-0 kubenswrapper[4091]: W0313 10:32:45.219810 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:45.220298 master-0 kubenswrapper[4091]: E0313 10:32:45.220122 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:45.429440 master-0 kubenswrapper[4091]: I0313 10:32:45.429338 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:45.431052 master-0 kubenswrapper[4091]: I0313 10:32:45.431004 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:45.431140 master-0 kubenswrapper[4091]: I0313 10:32:45.431072 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:45.431140 master-0 kubenswrapper[4091]: I0313 10:32:45.431083 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:45.431212 master-0 kubenswrapper[4091]: I0313 10:32:45.431164 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 10:32:45.432255 master-0 kubenswrapper[4091]: E0313 10:32:45.432207 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 10:32:45.892838 master-0 kubenswrapper[4091]: W0313 10:32:45.892707 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:45.892838 master-0 kubenswrapper[4091]: E0313 10:32:45.892793 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:45.911811 master-0 kubenswrapper[4091]: I0313 10:32:45.911771 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:46.216638 master-0 kubenswrapper[4091]: I0313 10:32:46.216555 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"a6a13e582092662aa7c7eefb83f8515ba545374741aab1781847bd04e676290b"}
Mar 13 10:32:46.217326 master-0 kubenswrapper[4091]: I0313 10:32:46.216694 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:46.218707 master-0 kubenswrapper[4091]: I0313 10:32:46.218234 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:46.218707 master-0 kubenswrapper[4091]: I0313 10:32:46.218272 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:46.218707 master-0 kubenswrapper[4091]: I0313 10:32:46.218282 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:46.224604 master-0 kubenswrapper[4091]: W0313 10:32:46.224509 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:46.224667 master-0 kubenswrapper[4091]: E0313 10:32:46.224636 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:46.517204 master-0 kubenswrapper[4091]: W0313 10:32:46.517024 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:46.517204 master-0 kubenswrapper[4091]: E0313 10:32:46.517134 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:46.912703 master-0 kubenswrapper[4091]: I0313 10:32:46.912656 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:47.220780 master-0 kubenswrapper[4091]: I0313 10:32:47.220173 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b"}
Mar 13 10:32:47.222747 master-0 kubenswrapper[4091]: I0313 10:32:47.222678 4091 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="a6a13e582092662aa7c7eefb83f8515ba545374741aab1781847bd04e676290b" exitCode=0
Mar 13 10:32:47.222815 master-0 kubenswrapper[4091]: I0313 10:32:47.222758 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"a6a13e582092662aa7c7eefb83f8515ba545374741aab1781847bd04e676290b"}
Mar 13 10:32:47.222815 master-0 kubenswrapper[4091]: I0313 10:32:47.222772 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:47.223984 master-0 kubenswrapper[4091]: I0313 10:32:47.223957 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:47.224050 master-0 kubenswrapper[4091]: I0313 10:32:47.223993 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:47.224050 master-0 kubenswrapper[4091]: I0313 10:32:47.224006 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:47.912973 master-0 kubenswrapper[4091]: I0313 10:32:47.912870 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:48.106616 master-0 kubenswrapper[4091]: I0313 10:32:48.106519 4091 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 10:32:48.108701 master-0 kubenswrapper[4091]: E0313 10:32:48.108646 4091 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:48.134902 master-0 kubenswrapper[4091]: E0313 10:32:48.134820 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 13 10:32:48.228225 master-0 kubenswrapper[4091]: I0313 10:32:48.228055 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe"}
Mar 13 10:32:48.228225 master-0 kubenswrapper[4091]: I0313 10:32:48.228168 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:48.229231 master-0 kubenswrapper[4091]: I0313 10:32:48.229181 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:48.229270 master-0 kubenswrapper[4091]: I0313 10:32:48.229240 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:48.229270 master-0 kubenswrapper[4091]: I0313 10:32:48.229258 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:48.229875 master-0 kubenswrapper[4091]: I0313 10:32:48.229830 4091 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log"
Mar 13 10:32:48.230459 master-0 kubenswrapper[4091]: I0313 10:32:48.230403 4091 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="473562805fd223c25d262ede7504bd57f67adaedb62dfd22ad9d3b5f5cd8bdf8" exitCode=1
Mar 13 10:32:48.230519 master-0 kubenswrapper[4091]: I0313 10:32:48.230454 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"473562805fd223c25d262ede7504bd57f67adaedb62dfd22ad9d3b5f5cd8bdf8"}
Mar 13 10:32:48.230519 master-0 kubenswrapper[4091]: I0313 10:32:48.230508 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:48.231344 master-0 kubenswrapper[4091]: I0313 10:32:48.231303 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:48.231397 master-0 kubenswrapper[4091]: I0313 10:32:48.231346 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:48.231397 master-0 kubenswrapper[4091]: I0313 10:32:48.231358 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:48.231757 master-0 kubenswrapper[4091]: I0313 10:32:48.231723 4091 scope.go:117] "RemoveContainer" containerID="473562805fd223c25d262ede7504bd57f67adaedb62dfd22ad9d3b5f5cd8bdf8"
Mar 13 10:32:48.633209 master-0 kubenswrapper[4091]: I0313 10:32:48.633136 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:48.634353 master-0 kubenswrapper[4091]: I0313 10:32:48.634313 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:48.634431 master-0 kubenswrapper[4091]: I0313 10:32:48.634364 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:48.634431 master-0 kubenswrapper[4091]: I0313 10:32:48.634378 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:48.634524 master-0 kubenswrapper[4091]: I0313 10:32:48.634439 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 10:32:48.636234 master-0 kubenswrapper[4091]: E0313 10:32:48.636195 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 10:32:48.912001 master-0 kubenswrapper[4091]: I0313 10:32:48.911942 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:49.236756 master-0 kubenswrapper[4091]: I0313 10:32:49.236644 4091 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 13 10:32:49.237473 master-0 kubenswrapper[4091]: I0313 10:32:49.237439 4091 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log"
Mar 13 10:32:49.238744 master-0 kubenswrapper[4091]: I0313 10:32:49.238047 4091 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="1a9724b25ed0498bb5b1361a9da6d606e22a0d307ca16a1e9cf368edeaf6be72" exitCode=1
Mar 13 10:32:49.238744 master-0 kubenswrapper[4091]: I0313 10:32:49.238143 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:49.238744 master-0 kubenswrapper[4091]: I0313 10:32:49.238328 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:49.238744 master-0 kubenswrapper[4091]: I0313 10:32:49.238137 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"1a9724b25ed0498bb5b1361a9da6d606e22a0d307ca16a1e9cf368edeaf6be72"}
Mar 13 10:32:49.238744 master-0 kubenswrapper[4091]: I0313 10:32:49.238477 4091 scope.go:117] "RemoveContainer" containerID="473562805fd223c25d262ede7504bd57f67adaedb62dfd22ad9d3b5f5cd8bdf8"
Mar 13 10:32:49.239955 master-0 kubenswrapper[4091]: I0313 10:32:49.239277 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:49.239955 master-0 kubenswrapper[4091]: I0313 10:32:49.239313 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:49.239955 master-0 kubenswrapper[4091]: I0313 10:32:49.239325 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:49.239955 master-0 kubenswrapper[4091]: I0313 10:32:49.239451 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:49.239955 master-0 kubenswrapper[4091]: I0313 10:32:49.239499 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:49.239955 master-0 kubenswrapper[4091]: I0313 10:32:49.239512 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:49.239955 master-0 kubenswrapper[4091]: I0313 10:32:49.239740 4091 scope.go:117] "RemoveContainer" containerID="1a9724b25ed0498bb5b1361a9da6d606e22a0d307ca16a1e9cf368edeaf6be72"
Mar 13 10:32:49.239955 master-0 kubenswrapper[4091]: E0313 10:32:49.239902 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 13 10:32:49.724881 master-0 kubenswrapper[4091]: W0313 10:32:49.724744 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:49.724881 master-0 kubenswrapper[4091]: E0313 10:32:49.724826 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:49.815087 master-0 kubenswrapper[4091]: E0313 10:32:49.814914 4091 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c600cec83bda5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.901063589 +0000 UTC m=+0.589786061,LastTimestamp:2026-03-13 10:32:41.901063589 +0000 UTC m=+0.589786061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 10:32:49.901769 master-0 kubenswrapper[4091]: W0313 10:32:49.901709 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:49.901879 master-0 kubenswrapper[4091]: E0313 10:32:49.901797 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:49.912320 master-0 kubenswrapper[4091]: I0313 10:32:49.912265 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:50.149622 master-0 kubenswrapper[4091]: W0313 10:32:50.149494 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:50.149922 master-0 kubenswrapper[4091]: E0313 10:32:50.149649 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:50.242149 master-0 kubenswrapper[4091]: I0313 10:32:50.242092 4091 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 13 10:32:50.242857 master-0 kubenswrapper[4091]: I0313 10:32:50.242664 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:50.243433 master-0 kubenswrapper[4091]: I0313 10:32:50.243415 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:50.243507 master-0 kubenswrapper[4091]: I0313 10:32:50.243438 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:50.243507 master-0 kubenswrapper[4091]: I0313 10:32:50.243448 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:32:50.243786 master-0 kubenswrapper[4091]: I0313 10:32:50.243771 4091 scope.go:117] "RemoveContainer" containerID="1a9724b25ed0498bb5b1361a9da6d606e22a0d307ca16a1e9cf368edeaf6be72"
Mar 13 10:32:50.243947 master-0 kubenswrapper[4091]: E0313 10:32:50.243910 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 13 10:32:50.912376 master-0 kubenswrapper[4091]: I0313 10:32:50.912265 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:51.025622 master-0 kubenswrapper[4091]: W0313 10:32:51.025471 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:51.025622 master-0 kubenswrapper[4091]: E0313 10:32:51.025580 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 10:32:51.912230 master-0 kubenswrapper[4091]: I0313 10:32:51.912121 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:52.320204 master-0 kubenswrapper[4091]: E0313 10:32:52.320087 4091 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 10:32:52.911926 master-0 kubenswrapper[4091]: I0313 10:32:52.911861 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:53.912292 master-0 kubenswrapper[4091]: I0313 10:32:53.912225 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:54.536547 master-0 kubenswrapper[4091]: E0313 10:32:54.536473 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s"
Mar 13 10:32:54.912284 master-0 kubenswrapper[4091]: I0313 10:32:54.912224 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 10:32:55.037258 master-0 kubenswrapper[4091]: I0313 10:32:55.037181 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:32:55.038891 master-0 kubenswrapper[4091]: I0313 10:32:55.038837 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:32:55.038962 master-0 kubenswrapper[4091]: I0313 10:32:55.038902 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:32:55.038962 master-0 kubenswrapper[4091]: I0313 10:32:55.038915 4091 kubelet_node_status.go:724] "Recording event message for node"
node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:55.039041 master-0 kubenswrapper[4091]: I0313 10:32:55.038990 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 10:32:55.040103 master-0 kubenswrapper[4091]: E0313 10:32:55.040058 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 10:32:55.912165 master-0 kubenswrapper[4091]: I0313 10:32:55.912087 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:56.111345 master-0 kubenswrapper[4091]: I0313 10:32:56.111265 4091 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 10:32:56.113055 master-0 kubenswrapper[4091]: E0313 10:32:56.113025 4091 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:32:56.912372 master-0 kubenswrapper[4091]: I0313 10:32:56.912216 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:57.916625 master-0 kubenswrapper[4091]: I0313 10:32:57.912730 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:57.973888 master-0 kubenswrapper[4091]: W0313 10:32:57.973801 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:57.973888 master-0 kubenswrapper[4091]: E0313 10:32:57.973890 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:32:58.262664 master-0 kubenswrapper[4091]: I0313 10:32:58.262465 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"80284c850caf3e93eb3675f42a22bd510ddbac6e27d80f3eae83dafefe028254"} Mar 13 10:32:58.264202 master-0 kubenswrapper[4091]: I0313 10:32:58.264133 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:58.264346 master-0 kubenswrapper[4091]: I0313 10:32:58.264155 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"3bbb19054cdef32aad8515587717178e7bce7c315eb6bc762119d4e27dd7a9b0"} Mar 13 10:32:58.265137 master-0 kubenswrapper[4091]: I0313 10:32:58.265101 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 
10:32:58.265230 master-0 kubenswrapper[4091]: I0313 10:32:58.265143 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:58.265230 master-0 kubenswrapper[4091]: I0313 10:32:58.265154 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:58.266395 master-0 kubenswrapper[4091]: I0313 10:32:58.266364 4091 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04" exitCode=0 Mar 13 10:32:58.266557 master-0 kubenswrapper[4091]: I0313 10:32:58.266440 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:58.266840 master-0 kubenswrapper[4091]: I0313 10:32:58.266459 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04"} Mar 13 10:32:58.268091 master-0 kubenswrapper[4091]: I0313 10:32:58.268022 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:58.268271 master-0 kubenswrapper[4091]: I0313 10:32:58.268253 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:58.268396 master-0 kubenswrapper[4091]: I0313 10:32:58.268382 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:58.271536 master-0 kubenswrapper[4091]: I0313 10:32:58.271519 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:58.272647 master-0 kubenswrapper[4091]: I0313 10:32:58.272607 4091 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:58.272647 master-0 kubenswrapper[4091]: I0313 10:32:58.272640 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:58.272647 master-0 kubenswrapper[4091]: I0313 10:32:58.272650 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:58.564250 master-0 kubenswrapper[4091]: W0313 10:32:58.564080 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 10:32:58.564250 master-0 kubenswrapper[4091]: E0313 10:32:58.564200 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 10:32:59.271832 master-0 kubenswrapper[4091]: I0313 10:32:59.271771 4091 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="80284c850caf3e93eb3675f42a22bd510ddbac6e27d80f3eae83dafefe028254" exitCode=1 Mar 13 10:32:59.272543 master-0 kubenswrapper[4091]: I0313 10:32:59.271864 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"80284c850caf3e93eb3675f42a22bd510ddbac6e27d80f3eae83dafefe028254"} Mar 13 10:32:59.274512 master-0 kubenswrapper[4091]: I0313 10:32:59.274451 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239"} Mar 13 10:32:59.274512 master-0 kubenswrapper[4091]: I0313 10:32:59.274502 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:32:59.275795 master-0 kubenswrapper[4091]: I0313 10:32:59.275771 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:32:59.275858 master-0 kubenswrapper[4091]: I0313 10:32:59.275814 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:32:59.275858 master-0 kubenswrapper[4091]: I0313 10:32:59.275829 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:32:59.834177 master-0 kubenswrapper[4091]: E0313 10:32:59.830253 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cec83bda5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.901063589 +0000 UTC m=+0.589786061,LastTimestamp:2026-03-13 10:32:41.901063589 +0000 UTC m=+0.589786061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.834177 master-0 kubenswrapper[4091]: I0313 10:32:59.831108 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:32:59.839038 master-0 kubenswrapper[4091]: E0313 10:32:59.838902 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e6949a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97465001 +0000 UTC m=+0.663372502,LastTimestamp:2026-03-13 10:32:41.97465001 +0000 UTC m=+0.663372502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.844370 master-0 kubenswrapper[4091]: E0313 10:32:59.844207 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" event="&Event{ObjectMeta:{master-0.189c600cf0e72374 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97468658 +0000 UTC 
m=+0.663409042,LastTimestamp:2026-03-13 10:32:41.97468658 +0000 UTC m=+0.663409042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.865721 master-0 kubenswrapper[4091]: E0313 10:32:59.865128 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e757fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97470003 +0000 UTC m=+0.663422492,LastTimestamp:2026-03-13 10:32:41.97470003 +0000 UTC m=+0.663422492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.877439 master-0 kubenswrapper[4091]: E0313 10:32:59.877284 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600d0573f3dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:42.319459292 +0000 UTC m=+1.008181764,LastTimestamp:2026-03-13 10:32:42.319459292 +0000 UTC m=+1.008181764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.890272 master-0 kubenswrapper[4091]: E0313 10:32:59.889676 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e6949a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e6949a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97465001 +0000 UTC m=+0.663372502,LastTimestamp:2026-03-13 10:32:42.419635855 +0000 UTC m=+1.108358317,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.895548 master-0 kubenswrapper[4091]: E0313 10:32:59.895374 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e72374\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e72374 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97468658 +0000 UTC m=+0.663409042,LastTimestamp:2026-03-13 10:32:42.419660685 +0000 UTC m=+1.108383147,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.899860 master-0 kubenswrapper[4091]: E0313 10:32:59.899687 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e757fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e757fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97470003 +0000 UTC m=+0.663422492,LastTimestamp:2026-03-13 10:32:42.419675825 +0000 UTC m=+1.108398287,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.905197 master-0 kubenswrapper[4091]: E0313 10:32:59.903849 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e6949a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e6949a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97465001 +0000 UTC m=+0.663372502,LastTimestamp:2026-03-13 10:32:42.505692599 +0000 UTC m=+1.194415061,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.908700 master-0 kubenswrapper[4091]: E0313 10:32:59.908230 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e72374\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e72374 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97468658 +0000 UTC m=+0.663409042,LastTimestamp:2026-03-13 10:32:42.505708909 +0000 UTC m=+1.194431371,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.913835 master-0 kubenswrapper[4091]: E0313 10:32:59.913569 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e757fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e757fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97470003 +0000 UTC m=+0.663422492,LastTimestamp:2026-03-13 10:32:42.505736349 +0000 UTC m=+1.194458811,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.913835 master-0 kubenswrapper[4091]: I0313 10:32:59.913681 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:32:59.915810 master-0 kubenswrapper[4091]: E0313 10:32:59.915295 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e6949a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e6949a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97465001 +0000 UTC m=+0.663372502,LastTimestamp:2026-03-13 10:32:42.506895281 +0000 UTC m=+1.195617773,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.916327 master-0 kubenswrapper[4091]: W0313 10:32:59.916166 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 13 10:32:59.916327 master-0 kubenswrapper[4091]: E0313 10:32:59.916206 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot 
list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 10:32:59.917234 master-0 kubenswrapper[4091]: E0313 10:32:59.917164 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e72374\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e72374 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97468658 +0000 UTC m=+0.663409042,LastTimestamp:2026-03-13 10:32:42.506949321 +0000 UTC m=+1.195671813,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.920186 master-0 kubenswrapper[4091]: E0313 10:32:59.920097 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e757fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e757fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97470003 +0000 UTC m=+0.663422492,LastTimestamp:2026-03-13 10:32:42.506968381 +0000 UTC m=+1.195690873,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.924031 master-0 kubenswrapper[4091]: E0313 10:32:59.923938 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e6949a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e6949a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97465001 +0000 UTC m=+0.663372502,LastTimestamp:2026-03-13 10:32:42.508116293 +0000 UTC m=+1.196838795,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.929695 master-0 kubenswrapper[4091]: E0313 10:32:59.928887 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e72374\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e72374 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97468658 +0000 UTC m=+0.663409042,LastTimestamp:2026-03-13 10:32:42.508143563 +0000 UTC m=+1.196866055,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.935078 master-0 kubenswrapper[4091]: E0313 10:32:59.934938 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e757fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e757fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97470003 +0000 UTC m=+0.663422492,LastTimestamp:2026-03-13 10:32:42.508189923 +0000 UTC m=+1.196912415,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.940259 master-0 kubenswrapper[4091]: E0313 10:32:59.940136 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e6949a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e6949a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97465001 +0000 UTC m=+0.663372502,LastTimestamp:2026-03-13 10:32:42.50869826 +0000 UTC m=+1.197420762,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.947243 master-0 kubenswrapper[4091]: E0313 10:32:59.947083 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e72374\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e72374 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97468658 +0000 UTC m=+0.663409042,LastTimestamp:2026-03-13 10:32:42.508729239 +0000 UTC m=+1.197451741,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.953107 master-0 kubenswrapper[4091]: E0313 10:32:59.952997 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e757fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e757fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97470003 +0000 UTC m=+0.663422492,LastTimestamp:2026-03-13 10:32:42.508745009 +0000 UTC m=+1.197467501,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.958382 master-0 kubenswrapper[4091]: E0313 10:32:59.958182 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e6949a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e6949a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97465001 +0000 UTC m=+0.663372502,LastTimestamp:2026-03-13 10:32:42.508890128 +0000 UTC m=+1.197612630,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.965181 master-0 kubenswrapper[4091]: E0313 10:32:59.965020 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e72374\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e72374 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97468658 +0000 UTC m=+0.663409042,LastTimestamp:2026-03-13 10:32:42.508941808 +0000 UTC m=+1.197664310,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.970518 master-0 kubenswrapper[4091]: E0313 10:32:59.970371 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e757fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e757fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97470003 +0000 UTC m=+0.663422492,LastTimestamp:2026-03-13 10:32:42.508958108 +0000 UTC m=+1.197680600,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.979477 master-0 kubenswrapper[4091]: E0313 10:32:59.979298 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e6949a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e6949a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97465001 +0000 UTC m=+0.663372502,LastTimestamp:2026-03-13 10:32:42.509991921 +0000 UTC m=+1.198714433,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.984676 master-0 kubenswrapper[4091]: E0313 10:32:59.984547 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c600cf0e72374\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c600cf0e72374 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:41.97468658 +0000 UTC m=+0.663409042,LastTimestamp:2026-03-13 10:32:42.510023871 +0000 UTC m=+1.198746363,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.990475 master-0 kubenswrapper[4091]: E0313 10:32:59.990347 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c600d67b8c853 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:43.968137299 +0000 UTC 
m=+2.656859751,LastTimestamp:2026-03-13 10:32:43.968137299 +0000 UTC m=+2.656859751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.995274 master-0 kubenswrapper[4091]: E0313 10:32:59.995166 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c600d68dcba82 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:43.987270274 +0000 UTC m=+2.675992736,LastTimestamp:2026-03-13 10:32:43.987270274 +0000 UTC m=+2.675992736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:32:59.999534 master-0 kubenswrapper[4091]: E0313 10:32:59.999449 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600d6ae33148 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:44.021248328 +0000 UTC m=+2.709970790,LastTimestamp:2026-03-13 10:32:44.021248328 +0000 UTC m=+2.709970790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.003095 master-0 kubenswrapper[4091]: E0313 10:33:00.003035 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c600d6dfa8a4a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:44.07311009 +0000 UTC m=+2.761832552,LastTimestamp:2026-03-13 10:32:44.07311009 +0000 UTC m=+2.761832552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.008731 master-0 kubenswrapper[4091]: E0313 10:33:00.008481 4091 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c600d711b2aa9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:44.125579945 +0000 UTC m=+2.814302397,LastTimestamp:2026-03-13 10:32:44.125579945 +0000 UTC m=+2.814302397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.039187 master-0 kubenswrapper[4091]: E0313 10:33:00.036353 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600dd311b658 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 1.747s (1.747s including waiting). 
Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:45.769127512 +0000 UTC m=+4.457849984,LastTimestamp:2026-03-13 10:32:45.769127512 +0000 UTC m=+4.457849984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.049620 master-0 kubenswrapper[4091]: E0313 10:33:00.047045 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600ddfbae63a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:45.981541946 +0000 UTC m=+4.670264398,LastTimestamp:2026-03-13 10:32:45.981541946 +0000 UTC m=+4.670264398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.051307 master-0 kubenswrapper[4091]: E0313 10:33:00.051235 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600de080a1a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:45.994500514 +0000 UTC m=+4.683222976,LastTimestamp:2026-03-13 10:32:45.994500514 +0000 UTC m=+4.683222976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.056232 master-0 kubenswrapper[4091]: E0313 10:33:00.056158 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c600e144897a5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 2.737s (2.737s including waiting). 
Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:46.863243173 +0000 UTC m=+5.551965635,LastTimestamp:2026-03-13 10:32:46.863243173 +0000 UTC m=+5.551965635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.060918 master-0 kubenswrapper[4091]: E0313 10:33:00.060753 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c600e20267825 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.062333477 +0000 UTC m=+5.751055959,LastTimestamp:2026-03-13 10:32:47.062333477 +0000 UTC m=+5.751055959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.066006 master-0 kubenswrapper[4091]: E0313 10:33:00.065798 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c600e20ff27a7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.076534183 +0000 UTC m=+5.765256645,LastTimestamp:2026-03-13 10:32:47.076534183 +0000 UTC m=+5.765256645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.071355 master-0 kubenswrapper[4091]: E0313 10:33:00.071199 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c600e21228e54 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.078854228 +0000 UTC m=+5.767576690,LastTimestamp:2026-03-13 10:32:47.078854228 +0000 UTC m=+5.767576690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.077582 master-0 kubenswrapper[4091]: E0313 10:33:00.077415 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600e2a07784e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.228074062 +0000 UTC m=+5.916796564,LastTimestamp:2026-03-13 10:32:47.228074062 +0000 UTC m=+5.916796564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.083100 master-0 kubenswrapper[4091]: E0313 10:33:00.082821 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c600e3c4b45c6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.534507462 +0000 UTC m=+6.223229924,LastTimestamp:2026-03-13 10:32:47.534507462 +0000 UTC m=+6.223229924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.088141 master-0 
kubenswrapper[4091]: E0313 10:33:00.087903 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600e3c790078 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.537504376 +0000 UTC m=+6.226226838,LastTimestamp:2026-03-13 10:32:47.537504376 +0000 UTC m=+6.226226838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.092701 master-0 kubenswrapper[4091]: E0313 10:33:00.092489 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c600e3e2f7a48 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.566240328 +0000 UTC m=+6.254962800,LastTimestamp:2026-03-13 10:32:47.566240328 +0000 UTC m=+6.254962800,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.097498 master-0 kubenswrapper[4091]: E0313 10:33:00.097351 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600e3e6ff4fd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.570466045 +0000 UTC m=+6.259188507,LastTimestamp:2026-03-13 10:32:47.570466045 +0000 UTC m=+6.259188507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.102913 master-0 kubenswrapper[4091]: E0313 10:33:00.102707 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c600e2a07784e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600e2a07784e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.228074062 +0000 UTC m=+5.916796564,LastTimestamp:2026-03-13 10:32:48.235118491 +0000 UTC m=+6.923840953,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.108026 master-0 kubenswrapper[4091]: E0313 10:33:00.107862 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c600e3c790078\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600e3c790078 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.537504376 +0000 UTC m=+6.226226838,LastTimestamp:2026-03-13 10:32:48.716251127 +0000 UTC m=+7.404973589,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.113187 master-0 kubenswrapper[4091]: E0313 10:33:00.112866 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c600e3e6ff4fd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600e3e6ff4fd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.570466045 +0000 UTC m=+6.259188507,LastTimestamp:2026-03-13 10:32:48.789997064 +0000 UTC m=+7.478719526,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.119004 master-0 kubenswrapper[4091]: E0313 10:33:00.118860 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600ea1f124e5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:49.239876837 +0000 UTC m=+7.928599299,LastTimestamp:2026-03-13 10:32:49.239876837 +0000 UTC m=+7.928599299,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.123961 master-0 kubenswrapper[4091]: E0313 10:33:00.123808 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c600ea1f124e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600ea1f124e5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:49.239876837 +0000 UTC m=+7.928599299,LastTimestamp:2026-03-13 10:32:50.243890803 +0000 UTC m=+8.932613265,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.149439 master-0 kubenswrapper[4091]: E0313 10:33:00.148412 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c60108b356691 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 13.375s (13.375s including waiting). Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.448408721 +0000 UTC m=+16.137131183,LastTimestamp:2026-03-13 10:32:57.448408721 +0000 UTC m=+16.137131183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.158423 master-0 kubenswrapper[4091]: E0313 10:33:00.158226 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c60108ceb0423 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 13.489s (13.489s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.477088291 +0000 UTC m=+16.165810743,LastTimestamp:2026-03-13 10:32:57.477088291 +0000 UTC m=+16.165810743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.168626 master-0 kubenswrapper[4091]: E0313 10:33:00.163974 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c60108dcad3b8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 13.523s (13.523s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.49175596 +0000 UTC m=+16.180478422,LastTimestamp:2026-03-13 10:32:57.49175596 +0000 UTC m=+16.180478422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.168626 master-0 kubenswrapper[4091]: E0313 10:33:00.168378 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c6010965ac361 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.635406689 +0000 UTC m=+16.324129151,LastTimestamp:2026-03-13 10:32:57.635406689 +0000 UTC m=+16.324129151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.173030 master-0 kubenswrapper[4091]: E0313 10:33:00.172848 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c6010965aff11 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.635421969 +0000 UTC m=+16.324144431,LastTimestamp:2026-03-13 10:32:57.635421969 +0000 UTC m=+16.324144431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.177676 master-0 kubenswrapper[4091]: E0313 10:33:00.177568 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c601096ff0d59 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.646173529 +0000 UTC m=+16.334896001,LastTimestamp:2026-03-13 10:32:57.646173529 +0000 UTC m=+16.334896001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.183043 master-0 kubenswrapper[4091]: E0313 10:33:00.182839 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c60109715e909 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.647671561 +0000 UTC m=+16.336394023,LastTimestamp:2026-03-13 10:32:57.647671561 +0000 UTC m=+16.336394023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.187222 master-0 kubenswrapper[4091]: E0313 10:33:00.187121 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c6010976782ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.65301937 +0000 UTC m=+16.341741832,LastTimestamp:2026-03-13 10:32:57.65301937 +0000 UTC m=+16.341741832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.191203 master-0 kubenswrapper[4091]: E0313 10:33:00.191073 4091 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c601098063750 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.66342024 +0000 UTC m=+16.352142722,LastTimestamp:2026-03-13 10:32:57.66342024 +0000 UTC m=+16.352142722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.194938 master-0 kubenswrapper[4091]: E0313 10:33:00.194840 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c601098b60ef2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.674944242 +0000 UTC m=+16.363666714,LastTimestamp:2026-03-13 10:32:57.674944242 +0000 UTC m=+16.363666714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.199505 master-0 kubenswrapper[4091]: E0313 10:33:00.199330 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c6010bc438d89 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:58.271419785 +0000 UTC m=+16.960142257,LastTimestamp:2026-03-13 10:32:58.271419785 +0000 UTC m=+16.960142257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.203817 master-0 kubenswrapper[4091]: E0313 10:33:00.203721 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c6010c9e933ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:58.500379563 +0000 UTC m=+17.189102025,LastTimestamp:2026-03-13 10:32:58.500379563 +0000 UTC m=+17.189102025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.208220 master-0 kubenswrapper[4091]: E0313 10:33:00.208092 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c6010ca89bc5d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:58.510900317 +0000 UTC m=+17.199622769,LastTimestamp:2026-03-13 10:32:58.510900317 +0000 UTC m=+17.199622769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.215061 master-0 kubenswrapper[4091]: E0313 10:33:00.214978 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c6010ca9840d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:58.511851733 +0000 UTC m=+17.200574195,LastTimestamp:2026-03-13 10:32:58.511851733 +0000 UTC m=+17.200574195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.338755 master-0 kubenswrapper[4091]: E0313 10:33:00.338535 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c601137209756 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 2.685s (2.685s including waiting). 
Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:33:00.332726102 +0000 UTC m=+19.021448564,LastTimestamp:2026-03-13 10:33:00.332726102 +0000 UTC m=+19.021448564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.524858 master-0 kubenswrapper[4091]: E0313 10:33:00.524677 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c6011422f25f0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:33:00.518229488 +0000 UTC m=+19.206951950,LastTimestamp:2026-03-13 10:33:00.518229488 +0000 UTC m=+19.206951950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.542229 master-0 kubenswrapper[4091]: E0313 10:33:00.541786 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c601143486aae kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:33:00.536662702 +0000 UTC m=+19.225385164,LastTimestamp:2026-03-13 10:33:00.536662702 +0000 UTC m=+19.225385164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:00.930901 master-0 kubenswrapper[4091]: I0313 10:33:00.930833 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:01.016042 master-0 kubenswrapper[4091]: W0313 10:33:01.015886 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 13 10:33:01.016042 master-0 kubenswrapper[4091]: E0313 10:33:01.015963 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 13 10:33:01.294971 master-0 kubenswrapper[4091]: I0313 10:33:01.294764 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"e9000808717ea9c0e3216e703e0ba1564b42f55e959843c60a49ae0e4eb9a8e7"} Mar 13 10:33:01.294971 master-0 
kubenswrapper[4091]: I0313 10:33:01.294928 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:01.295646 master-0 kubenswrapper[4091]: I0313 10:33:01.295618 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:01.295706 master-0 kubenswrapper[4091]: I0313 10:33:01.295648 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:01.295706 master-0 kubenswrapper[4091]: I0313 10:33:01.295662 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:01.295951 master-0 kubenswrapper[4091]: I0313 10:33:01.295920 4091 scope.go:117] "RemoveContainer" containerID="80284c850caf3e93eb3675f42a22bd510ddbac6e27d80f3eae83dafefe028254" Mar 13 10:33:01.434767 master-0 kubenswrapper[4091]: E0313 10:33:01.434608 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c6011787862a4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:33:01.42899882 +0000 UTC m=+20.117721282,LastTimestamp:2026-03-13 10:33:01.42899882 +0000 UTC m=+20.117721282,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:01.482053 master-0 kubenswrapper[4091]: E0313 10:33:01.481915 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c60117b4e76ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 2.964s (2.964s including waiting). Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:33:01.476583098 +0000 UTC m=+20.165305560,LastTimestamp:2026-03-13 10:33:01.476583098 +0000 UTC m=+20.165305560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:01.544469 master-0 kubenswrapper[4091]: E0313 10:33:01.544428 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 10:33:01.582402 master-0 kubenswrapper[4091]: E0313 10:33:01.582279 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c6010965ac361\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c6010965ac361 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.635406689 +0000 UTC m=+16.324129151,LastTimestamp:2026-03-13 10:33:01.57663397 +0000 UTC m=+20.265356432,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:01.595567 master-0 kubenswrapper[4091]: E0313 10:33:01.595422 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c601096ff0d59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c601096ff0d59 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:57.646173529 +0000 UTC m=+16.334896001,LastTimestamp:2026-03-13 10:33:01.589267032 +0000 UTC m=+20.277989494,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:01.641522 master-0 
kubenswrapper[4091]: E0313 10:33:01.641390 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c601184cd43bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:33:01.635888063 +0000 UTC m=+20.324610525,LastTimestamp:2026-03-13 10:33:01.635888063 +0000 UTC m=+20.324610525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:01.651415 master-0 kubenswrapper[4091]: E0313 10:33:01.651211 4091 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c6011856357ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:33:01.645723628 +0000 UTC m=+20.334446110,LastTimestamp:2026-03-13 10:33:01.645723628 +0000 
UTC m=+20.334446110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:01.916171 master-0 kubenswrapper[4091]: I0313 10:33:01.915954 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:02.041280 master-0 kubenswrapper[4091]: I0313 10:33:02.041158 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:02.042987 master-0 kubenswrapper[4091]: I0313 10:33:02.042929 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:02.043059 master-0 kubenswrapper[4091]: I0313 10:33:02.043001 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:02.043059 master-0 kubenswrapper[4091]: I0313 10:33:02.043019 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:02.043152 master-0 kubenswrapper[4091]: I0313 10:33:02.043092 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 10:33:02.048334 master-0 kubenswrapper[4091]: E0313 10:33:02.048281 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 13 10:33:02.300489 master-0 kubenswrapper[4091]: I0313 10:33:02.300416 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"b25246b87fe6711f1f7c66db1d40e94041f17222319c643c72a0f13f39f94ce3"} Mar 13 10:33:02.300804 master-0 kubenswrapper[4091]: I0313 10:33:02.300544 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:02.301770 master-0 kubenswrapper[4091]: I0313 10:33:02.301742 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:02.301827 master-0 kubenswrapper[4091]: I0313 10:33:02.301798 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:02.301827 master-0 kubenswrapper[4091]: I0313 10:33:02.301814 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:02.303143 master-0 kubenswrapper[4091]: I0313 10:33:02.303073 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f"} Mar 13 10:33:02.303218 master-0 kubenswrapper[4091]: I0313 10:33:02.303165 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:02.304037 master-0 kubenswrapper[4091]: I0313 10:33:02.303991 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:02.304037 master-0 kubenswrapper[4091]: I0313 10:33:02.304039 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:02.304130 master-0 kubenswrapper[4091]: I0313 10:33:02.304049 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 
10:33:02.320767 master-0 kubenswrapper[4091]: E0313 10:33:02.320622 4091 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 10:33:02.473674 master-0 kubenswrapper[4091]: I0313 10:33:02.473562 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:33:02.504457 master-0 kubenswrapper[4091]: I0313 10:33:02.504375 4091 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:33:02.513671 master-0 kubenswrapper[4091]: I0313 10:33:02.513627 4091 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:33:02.916268 master-0 kubenswrapper[4091]: I0313 10:33:02.916183 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:03.305995 master-0 kubenswrapper[4091]: I0313 10:33:03.305869 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:03.306393 master-0 kubenswrapper[4091]: I0313 10:33:03.305880 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:03.306393 master-0 kubenswrapper[4091]: I0313 10:33:03.306070 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:33:03.307145 master-0 kubenswrapper[4091]: I0313 10:33:03.307090 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:03.307145 master-0 kubenswrapper[4091]: I0313 10:33:03.307134 4091 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:03.308136 master-0 kubenswrapper[4091]: I0313 10:33:03.307152 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:03.308136 master-0 kubenswrapper[4091]: I0313 10:33:03.307089 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:03.308136 master-0 kubenswrapper[4091]: I0313 10:33:03.307223 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:03.308136 master-0 kubenswrapper[4091]: I0313 10:33:03.307242 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:03.311137 master-0 kubenswrapper[4091]: I0313 10:33:03.311062 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:33:03.920633 master-0 kubenswrapper[4091]: I0313 10:33:03.920510 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:04.202150 master-0 kubenswrapper[4091]: I0313 10:33:04.201969 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:04.203560 master-0 kubenswrapper[4091]: I0313 10:33:04.203509 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:04.203665 master-0 kubenswrapper[4091]: I0313 10:33:04.203572 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:04.203665 master-0 
kubenswrapper[4091]: I0313 10:33:04.203640 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:04.204257 master-0 kubenswrapper[4091]: I0313 10:33:04.204210 4091 scope.go:117] "RemoveContainer" containerID="1a9724b25ed0498bb5b1361a9da6d606e22a0d307ca16a1e9cf368edeaf6be72" Mar 13 10:33:04.215959 master-0 kubenswrapper[4091]: E0313 10:33:04.215775 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c600e2a07784e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600e2a07784e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.228074062 +0000 UTC m=+5.916796564,LastTimestamp:2026-03-13 10:33:04.207307654 +0000 UTC m=+22.896030146,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:04.308386 master-0 kubenswrapper[4091]: I0313 10:33:04.308316 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:04.308386 master-0 kubenswrapper[4091]: I0313 10:33:04.308347 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:04.309338 
master-0 kubenswrapper[4091]: I0313 10:33:04.309290 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:04.309338 master-0 kubenswrapper[4091]: I0313 10:33:04.309330 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:04.309338 master-0 kubenswrapper[4091]: I0313 10:33:04.309342 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:04.309778 master-0 kubenswrapper[4091]: I0313 10:33:04.309734 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:04.309778 master-0 kubenswrapper[4091]: I0313 10:33:04.309778 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:04.309886 master-0 kubenswrapper[4091]: I0313 10:33:04.309791 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:04.413218 master-0 kubenswrapper[4091]: E0313 10:33:04.412968 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c600e3c790078\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600e3c790078 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 
10:32:47.537504376 +0000 UTC m=+6.226226838,LastTimestamp:2026-03-13 10:33:04.406989206 +0000 UTC m=+23.095711668,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:04.426375 master-0 kubenswrapper[4091]: E0313 10:33:04.426208 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c600e3e6ff4fd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600e3e6ff4fd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:47.570466045 +0000 UTC m=+6.259188507,LastTimestamp:2026-03-13 10:33:04.421087469 +0000 UTC m=+23.109809951,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:04.857049 master-0 kubenswrapper[4091]: I0313 10:33:04.856958 4091 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:33:04.861348 master-0 kubenswrapper[4091]: I0313 10:33:04.861287 4091 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:33:04.915997 master-0 kubenswrapper[4091]: I0313 10:33:04.915927 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:05.317667 master-0 kubenswrapper[4091]: I0313 10:33:05.317535 4091 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.318390 4091 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.318813 4091 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="609f0e5551a709b73298eb7117d146c048b1a886bac85012fa0f0c1a2a1cd687" exitCode=1 Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.318885 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"609f0e5551a709b73298eb7117d146c048b1a886bac85012fa0f0c1a2a1cd687"} Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.318951 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.319000 4091 scope.go:117] "RemoveContainer" containerID="1a9724b25ed0498bb5b1361a9da6d606e22a0d307ca16a1e9cf368edeaf6be72" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.319029 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.319128 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.320271 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.320345 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.320355 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.320406 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.320420 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.320428 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.320504 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.320547 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:05.320964 master-0 kubenswrapper[4091]: I0313 10:33:05.320569 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:05.321927 master-0 kubenswrapper[4091]: I0313 10:33:05.321382 4091 scope.go:117] "RemoveContainer" containerID="609f0e5551a709b73298eb7117d146c048b1a886bac85012fa0f0c1a2a1cd687" Mar 13 10:33:05.321927 master-0 kubenswrapper[4091]: E0313 10:33:05.321564 4091 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 13 10:33:05.330213 master-0 kubenswrapper[4091]: E0313 10:33:05.330003 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c600ea1f124e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600ea1f124e5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:49.239876837 +0000 UTC m=+7.928599299,LastTimestamp:2026-03-13 10:33:05.321535823 +0000 UTC m=+24.010258285,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:05.932189 master-0 kubenswrapper[4091]: I0313 10:33:05.932059 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Mar 13 10:33:06.324572 master-0 kubenswrapper[4091]: I0313 10:33:06.324520 4091 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 10:33:06.325476 master-0 kubenswrapper[4091]: I0313 10:33:06.325430 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:06.326673 master-0 kubenswrapper[4091]: I0313 10:33:06.326622 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:06.326712 master-0 kubenswrapper[4091]: I0313 10:33:06.326692 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:06.326748 master-0 kubenswrapper[4091]: I0313 10:33:06.326715 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:06.917451 master-0 kubenswrapper[4091]: I0313 10:33:06.917374 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:07.916633 master-0 kubenswrapper[4091]: I0313 10:33:07.916542 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:08.349050 master-0 kubenswrapper[4091]: I0313 10:33:08.348918 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:33:08.349050 master-0 kubenswrapper[4091]: I0313 10:33:08.349065 4091 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Mar 13 10:33:08.350233 master-0 kubenswrapper[4091]: I0313 10:33:08.350190 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:08.350233 master-0 kubenswrapper[4091]: I0313 10:33:08.350232 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:08.350339 master-0 kubenswrapper[4091]: I0313 10:33:08.350266 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:08.553750 master-0 kubenswrapper[4091]: E0313 10:33:08.553581 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 10:33:08.919925 master-0 kubenswrapper[4091]: I0313 10:33:08.919874 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:09.049402 master-0 kubenswrapper[4091]: I0313 10:33:09.049264 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:09.050978 master-0 kubenswrapper[4091]: I0313 10:33:09.050925 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:09.051051 master-0 kubenswrapper[4091]: I0313 10:33:09.051001 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:09.051051 master-0 kubenswrapper[4091]: I0313 10:33:09.051027 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 13 10:33:09.051153 master-0 kubenswrapper[4091]: I0313 10:33:09.051099 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 10:33:09.057451 master-0 kubenswrapper[4091]: E0313 10:33:09.057403 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 13 10:33:09.112967 master-0 kubenswrapper[4091]: I0313 10:33:09.112874 4091 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:33:09.126156 master-0 kubenswrapper[4091]: I0313 10:33:09.126063 4091 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:33:09.334039 master-0 kubenswrapper[4091]: I0313 10:33:09.333825 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:09.335419 master-0 kubenswrapper[4091]: I0313 10:33:09.335363 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:09.335546 master-0 kubenswrapper[4091]: I0313 10:33:09.335461 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:09.335546 master-0 kubenswrapper[4091]: I0313 10:33:09.335489 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:09.918530 master-0 kubenswrapper[4091]: I0313 10:33:09.918435 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:10.336246 master-0 
kubenswrapper[4091]: I0313 10:33:10.336196 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:10.337505 master-0 kubenswrapper[4091]: I0313 10:33:10.337101 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:10.337505 master-0 kubenswrapper[4091]: I0313 10:33:10.337157 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:10.337505 master-0 kubenswrapper[4091]: I0313 10:33:10.337167 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:10.344351 master-0 kubenswrapper[4091]: I0313 10:33:10.344260 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:33:10.917054 master-0 kubenswrapper[4091]: I0313 10:33:10.916996 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:11.338979 master-0 kubenswrapper[4091]: I0313 10:33:11.338915 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:11.340766 master-0 kubenswrapper[4091]: I0313 10:33:11.340448 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:11.340766 master-0 kubenswrapper[4091]: I0313 10:33:11.340741 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:11.340766 master-0 kubenswrapper[4091]: I0313 10:33:11.340756 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 
10:33:11.346528 master-0 kubenswrapper[4091]: I0313 10:33:11.346495 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:33:11.920357 master-0 kubenswrapper[4091]: I0313 10:33:11.920281 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:12.321272 master-0 kubenswrapper[4091]: E0313 10:33:12.321159 4091 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 10:33:12.340730 master-0 kubenswrapper[4091]: I0313 10:33:12.340675 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:12.341769 master-0 kubenswrapper[4091]: I0313 10:33:12.341327 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:12.341769 master-0 kubenswrapper[4091]: I0313 10:33:12.341360 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:12.341769 master-0 kubenswrapper[4091]: I0313 10:33:12.341370 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:12.919198 master-0 kubenswrapper[4091]: I0313 10:33:12.919124 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:13.027893 master-0 kubenswrapper[4091]: I0313 10:33:13.027708 4091 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 10:33:13.046734 
master-0 kubenswrapper[4091]: I0313 10:33:13.046631 4091 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 13 10:33:13.536883 master-0 kubenswrapper[4091]: W0313 10:33:13.536816 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 13 10:33:13.536883 master-0 kubenswrapper[4091]: E0313 10:33:13.536876 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 13 10:33:13.917280 master-0 kubenswrapper[4091]: I0313 10:33:13.917104 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:14.235848 master-0 kubenswrapper[4091]: W0313 10:33:14.235670 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:14.235848 master-0 kubenswrapper[4091]: E0313 10:33:14.235771 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 10:33:14.919464 master-0 kubenswrapper[4091]: I0313 10:33:14.919379 4091 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:15.561020 master-0 kubenswrapper[4091]: E0313 10:33:15.560943 4091 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 10:33:15.720892 master-0 kubenswrapper[4091]: W0313 10:33:15.720826 4091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 13 10:33:15.720892 master-0 kubenswrapper[4091]: E0313 10:33:15.720899 4091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 10:33:15.916304 master-0 kubenswrapper[4091]: I0313 10:33:15.916082 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:16.057895 master-0 kubenswrapper[4091]: I0313 10:33:16.057816 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:16.059076 master-0 kubenswrapper[4091]: I0313 10:33:16.059051 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:16.059163 
master-0 kubenswrapper[4091]: I0313 10:33:16.059094 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:16.059163 master-0 kubenswrapper[4091]: I0313 10:33:16.059105 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:16.059163 master-0 kubenswrapper[4091]: I0313 10:33:16.059161 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 10:33:16.064274 master-0 kubenswrapper[4091]: E0313 10:33:16.064238 4091 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 13 10:33:16.203083 master-0 kubenswrapper[4091]: I0313 10:33:16.202904 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:33:16.204343 master-0 kubenswrapper[4091]: I0313 10:33:16.204292 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:33:16.204343 master-0 kubenswrapper[4091]: I0313 10:33:16.204328 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:33:16.204343 master-0 kubenswrapper[4091]: I0313 10:33:16.204356 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:33:16.204770 master-0 kubenswrapper[4091]: I0313 10:33:16.204740 4091 scope.go:117] "RemoveContainer" containerID="609f0e5551a709b73298eb7117d146c048b1a886bac85012fa0f0c1a2a1cd687" Mar 13 10:33:16.204944 master-0 kubenswrapper[4091]: E0313 10:33:16.204914 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 13 10:33:16.210321 master-0 kubenswrapper[4091]: E0313 10:33:16.210124 4091 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c600ea1f124e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c600ea1f124e5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:32:49.239876837 +0000 UTC m=+7.928599299,LastTimestamp:2026-03-13 10:33:16.204873851 +0000 UTC m=+34.893596313,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:33:16.919907 master-0 kubenswrapper[4091]: I0313 10:33:16.919787 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:17.641283 master-0 kubenswrapper[4091]: I0313 10:33:17.641108 4091 csr.go:261] certificate signing request csr-5vmbl is approved, waiting to be 
issued Mar 13 10:33:17.917108 master-0 kubenswrapper[4091]: I0313 10:33:17.916893 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:18.917376 master-0 kubenswrapper[4091]: I0313 10:33:18.917283 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:19.919614 master-0 kubenswrapper[4091]: I0313 10:33:19.919519 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:20.916517 master-0 kubenswrapper[4091]: I0313 10:33:20.916402 4091 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 10:33:21.574853 master-0 kubenswrapper[4091]: I0313 10:33:21.574700 4091 csr.go:257] certificate signing request csr-5vmbl is issued Mar 13 10:33:21.791278 master-0 kubenswrapper[4091]: I0313 10:33:21.791213 4091 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 13 10:33:21.932451 master-0 kubenswrapper[4091]: I0313 10:33:21.932321 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 10:33:21.949070 master-0 kubenswrapper[4091]: I0313 10:33:21.948991 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 10:33:22.006424 master-0 kubenswrapper[4091]: I0313 
10:33:22.006363 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:22.268916 master-0 kubenswrapper[4091]: I0313 10:33:22.268773 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:22.268916 master-0 kubenswrapper[4091]: E0313 10:33:22.268826 4091 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 13 10:33:22.289078 master-0 kubenswrapper[4091]: I0313 10:33:22.289008 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:22.309688 master-0 kubenswrapper[4091]: I0313 10:33:22.309629 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:22.322115 master-0 kubenswrapper[4091]: E0313 10:33:22.322055 4091 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 10:33:22.367857 master-0 kubenswrapper[4091]: I0313 10:33:22.367799 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:22.566843 master-0 kubenswrapper[4091]: E0313 10:33:22.566780 4091 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 13 10:33:22.576919 master-0 kubenswrapper[4091]: I0313 10:33:22.576808 4091 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 10:24:19 +0000 UTC, rotation deadline is 2026-03-14 06:52:01.433426217 +0000 UTC
Mar 13 10:33:22.576919 master-0 kubenswrapper[4091]: I0313 10:33:22.576908 4091 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h18m38.856526036s for next certificate rotation
Mar 13 10:33:22.629857 master-0 kubenswrapper[4091]: I0313 10:33:22.629801 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:22.629857 master-0 kubenswrapper[4091]: E0313 10:33:22.629847 4091 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 13 10:33:22.726615 master-0 kubenswrapper[4091]: I0313 10:33:22.726551 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:22.742258 master-0 kubenswrapper[4091]: I0313 10:33:22.742195 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:22.801452 master-0 kubenswrapper[4091]: I0313 10:33:22.801381 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:23.061009 master-0 kubenswrapper[4091]: I0313 10:33:23.060940 4091 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 10:33:23.061009 master-0 kubenswrapper[4091]: E0313 10:33:23.060999 4091 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 13 10:33:23.065172 master-0 kubenswrapper[4091]: I0313 10:33:23.065118 4091 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:33:23.066448 master-0 kubenswrapper[4091]: I0313 10:33:23.066397 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:33:23.066518 master-0 kubenswrapper[4091]: I0313 10:33:23.066459 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:33:23.066518 master-0 kubenswrapper[4091]: I0313 10:33:23.066472 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:33:23.066618 master-0 kubenswrapper[4091]: I0313 10:33:23.066554 4091 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 10:33:23.076357 master-0 kubenswrapper[4091]: I0313 10:33:23.076319 4091 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 13 10:33:23.076466 master-0 kubenswrapper[4091]: E0313 10:33:23.076361 4091 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 13 10:33:23.087206 master-0 kubenswrapper[4091]: E0313 10:33:23.087161 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:23.187955 master-0 kubenswrapper[4091]: E0313 10:33:23.187838 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:23.288438 master-0 kubenswrapper[4091]: E0313 10:33:23.288351 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:23.389352 master-0 kubenswrapper[4091]: E0313 10:33:23.389183 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:23.489493 master-0 kubenswrapper[4091]: E0313 10:33:23.489375 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:23.589812 master-0 kubenswrapper[4091]: E0313 10:33:23.589570 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:23.690261 master-0 kubenswrapper[4091]: E0313 10:33:23.690046 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:23.790374 master-0 kubenswrapper[4091]: E0313 10:33:23.790290 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:23.891119 master-0 kubenswrapper[4091]: E0313 10:33:23.891011 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:23.949928 master-0 kubenswrapper[4091]: I0313 10:33:23.949723 4091 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 13 10:33:23.991312 master-0 kubenswrapper[4091]: E0313 10:33:23.991194 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:24.091746 master-0 kubenswrapper[4091]: E0313 10:33:24.091673 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:24.192880 master-0 kubenswrapper[4091]: E0313 10:33:24.192805 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:24.295051 master-0 kubenswrapper[4091]: I0313 10:33:24.294764 4091 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 13 10:33:24.295051 master-0 kubenswrapper[4091]: E0313 10:33:24.294724 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:24.396298 master-0 kubenswrapper[4091]: E0313 10:33:24.396220 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:24.496835 master-0 kubenswrapper[4091]: E0313 10:33:24.496691 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:24.597767 master-0 kubenswrapper[4091]: E0313 10:33:24.597651 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:24.698888 master-0 kubenswrapper[4091]: E0313 10:33:24.698792 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:24.800672 master-0 kubenswrapper[4091]: E0313 10:33:24.800058 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:24.901062 master-0 kubenswrapper[4091]: E0313 10:33:24.900870 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:25.001141 master-0 kubenswrapper[4091]: E0313 10:33:25.001009 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:25.102396 master-0 kubenswrapper[4091]: E0313 10:33:25.102290 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:25.203149 master-0 kubenswrapper[4091]: E0313 10:33:25.202998 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:25.303463 master-0 kubenswrapper[4091]: E0313 10:33:25.303388 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:25.403983 master-0 kubenswrapper[4091]: E0313 10:33:25.403886 4091 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:33:25.448138 master-0 kubenswrapper[4091]: I0313 10:33:25.448091 4091 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 13 10:33:25.921532 master-0 kubenswrapper[4091]: I0313 10:33:25.921463 4091 apiserver.go:52] "Watching apiserver"
Mar 13 10:33:25.926834 master-0 kubenswrapper[4091]: I0313 10:33:25.926762 4091 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 13 10:33:25.927354 master-0 kubenswrapper[4091]: I0313 10:33:25.927112 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-s68gq","openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z","openshift-network-operator/network-operator-7c649bf6d4-6vpl4"]
Mar 13 10:33:25.928673 master-0 kubenswrapper[4091]: I0313 10:33:25.927572 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:25.928673 master-0 kubenswrapper[4091]: I0313 10:33:25.927668 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:25.928673 master-0 kubenswrapper[4091]: I0313 10:33:25.927577 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:25.930670 master-0 kubenswrapper[4091]: I0313 10:33:25.930655 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 13 10:33:25.932334 master-0 kubenswrapper[4091]: I0313 10:33:25.932318 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 10:33:25.932575 master-0 kubenswrapper[4091]: I0313 10:33:25.932563 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 10:33:25.933047 master-0 kubenswrapper[4091]: I0313 10:33:25.932838 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 10:33:25.933285 master-0 kubenswrapper[4091]: I0313 10:33:25.933271 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 10:33:25.933613 master-0 kubenswrapper[4091]: I0313 10:33:25.933496 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 13 10:33:25.933613 master-0 kubenswrapper[4091]: I0313 10:33:25.933541 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 13 10:33:25.933993 master-0 kubenswrapper[4091]: I0313 10:33:25.933942 4091 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 13 10:33:25.934048 master-0 kubenswrapper[4091]: I0313 10:33:25.933996 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 13 10:33:25.937738 master-0 kubenswrapper[4091]: I0313 10:33:25.937424 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 13 10:33:26.024217 master-0 kubenswrapper[4091]: I0313 10:33:26.024167 4091 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 13 10:33:26.072263 master-0 kubenswrapper[4091]: I0313 10:33:26.072188 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf36b4-e556-4723-a624-aa2edc69c301-service-ca\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.072263 master-0 kubenswrapper[4091]: I0313 10:33:26.072270 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-var-run-resolv-conf\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.072578 master-0 kubenswrapper[4091]: I0313 10:33:26.072300 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-sno-bootstrap-files\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.072578 master-0 kubenswrapper[4091]: I0313 10:33:26.072321 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvcj8\" (UniqueName: \"kubernetes.io/projected/b8337424-8677-401d-8c68-b58c7d9ab99a-kube-api-access-bvcj8\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.072578 master-0 kubenswrapper[4091]: I0313 10:33:26.072341 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.072578 master-0 kubenswrapper[4091]: I0313 10:33:26.072361 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.072578 master-0 kubenswrapper[4091]: I0313 10:33:26.072382 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-metrics-tls\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.072578 master-0 kubenswrapper[4091]: I0313 10:33:26.072502 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-resolv-conf\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.072578 master-0 kubenswrapper[4091]: I0313 10:33:26.072561 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rfpp\" (UniqueName: \"kubernetes.io/projected/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-kube-api-access-8rfpp\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.072578 master-0 kubenswrapper[4091]: I0313 10:33:26.072609 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.073008 master-0 kubenswrapper[4091]: I0313 10:33:26.072634 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aaf36b4-e556-4723-a624-aa2edc69c301-kube-api-access\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.073008 master-0 kubenswrapper[4091]: I0313 10:33:26.072668 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-host-etc-kube\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.073008 master-0 kubenswrapper[4091]: I0313 10:33:26.072712 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-ca-bundle\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.173540 master-0 kubenswrapper[4091]: I0313 10:33:26.173338 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-sno-bootstrap-files\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.173540 master-0 kubenswrapper[4091]: I0313 10:33:26.173425 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvcj8\" (UniqueName: \"kubernetes.io/projected/b8337424-8677-401d-8c68-b58c7d9ab99a-kube-api-access-bvcj8\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.173540 master-0 kubenswrapper[4091]: I0313 10:33:26.173478 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.173540 master-0 kubenswrapper[4091]: I0313 10:33:26.173522 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.173567 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-metrics-tls\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.173647 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-resolv-conf\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.173703 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfpp\" (UniqueName: \"kubernetes.io/projected/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-kube-api-access-8rfpp\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.173748 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.173793 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aaf36b4-e556-4723-a624-aa2edc69c301-kube-api-access\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.173833 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-host-etc-kube\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.173876 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-ca-bundle\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.173916 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf36b4-e556-4723-a624-aa2edc69c301-service-ca\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.173964 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-var-run-resolv-conf\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.174129 master-0 kubenswrapper[4091]: I0313 10:33:26.174127 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-var-run-resolv-conf\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.174826 master-0 kubenswrapper[4091]: I0313 10:33:26.174219 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-sno-bootstrap-files\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.174935 master-0 kubenswrapper[4091]: I0313 10:33:26.174897 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.175210 master-0 kubenswrapper[4091]: I0313 10:33:26.175095 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-ca-bundle\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.175537 master-0 kubenswrapper[4091]: E0313 10:33:26.175434 4091 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 10:33:26.175693 master-0 kubenswrapper[4091]: I0313 10:33:26.175657 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-resolv-conf\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.175784 master-0 kubenswrapper[4091]: E0313 10:33:26.175762 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:33:26.675573225 +0000 UTC m=+45.364295757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found
Mar 13 10:33:26.175858 master-0 kubenswrapper[4091]: I0313 10:33:26.175775 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.176245 master-0 kubenswrapper[4091]: I0313 10:33:26.176175 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-host-etc-kube\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.176791 master-0 kubenswrapper[4091]: I0313 10:33:26.176706 4091 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 13 10:33:26.176943 master-0 kubenswrapper[4091]: I0313 10:33:26.176882 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf36b4-e556-4723-a624-aa2edc69c301-service-ca\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.182991 master-0 kubenswrapper[4091]: I0313 10:33:26.182855 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-metrics-tls\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.196854 master-0 kubenswrapper[4091]: I0313 10:33:26.196656 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvcj8\" (UniqueName: \"kubernetes.io/projected/b8337424-8677-401d-8c68-b58c7d9ab99a-kube-api-access-bvcj8\") pod \"assisted-installer-controller-s68gq\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.206856 master-0 kubenswrapper[4091]: I0313 10:33:26.206768 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aaf36b4-e556-4723-a624-aa2edc69c301-kube-api-access\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.208385 master-0 kubenswrapper[4091]: I0313 10:33:26.208314 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfpp\" (UniqueName: \"kubernetes.io/projected/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-kube-api-access-8rfpp\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.247011 master-0 kubenswrapper[4091]: I0313 10:33:26.246926 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:33:26.272248 master-0 kubenswrapper[4091]: I0313 10:33:26.271784 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s68gq"
Mar 13 10:33:26.290434 master-0 kubenswrapper[4091]: W0313 10:33:26.290353 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8337424_8677_401d_8c68_b58c7d9ab99a.slice/crio-3684ce24f4407551543f74ac9f1a5ab3d105e55ba443e4519febf4f030d8826c WatchSource:0}: Error finding container 3684ce24f4407551543f74ac9f1a5ab3d105e55ba443e4519febf4f030d8826c: Status 404 returned error can't find the container with id 3684ce24f4407551543f74ac9f1a5ab3d105e55ba443e4519febf4f030d8826c
Mar 13 10:33:26.380031 master-0 kubenswrapper[4091]: I0313 10:33:26.379963 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-s68gq" event={"ID":"b8337424-8677-401d-8c68-b58c7d9ab99a","Type":"ContainerStarted","Data":"3684ce24f4407551543f74ac9f1a5ab3d105e55ba443e4519febf4f030d8826c"}
Mar 13 10:33:26.381086 master-0 kubenswrapper[4091]: I0313 10:33:26.381052 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" event={"ID":"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9","Type":"ContainerStarted","Data":"72d184f62fa595f3a7463191ce616e1db275cdc732a1ab006b74065651d152d4"}
Mar 13 10:33:26.678906 master-0 kubenswrapper[4091]: I0313 10:33:26.678843 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:26.679320 master-0 kubenswrapper[4091]: E0313 10:33:26.679237 4091 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 10:33:26.679417 master-0 kubenswrapper[4091]: E0313 10:33:26.679392 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:33:27.679366341 +0000 UTC m=+46.368088803 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found
Mar 13 10:33:27.687459 master-0 kubenswrapper[4091]: I0313 10:33:27.687395 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:27.688406 master-0 kubenswrapper[4091]: E0313 10:33:27.687650 4091 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 10:33:27.688406 master-0 kubenswrapper[4091]: E0313 10:33:27.687736 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:33:29.687704243 +0000 UTC m=+48.376426705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found
Mar 13 10:33:28.088311 master-0 kubenswrapper[4091]: I0313 10:33:28.085763 4091 csr.go:261] certificate signing request csr-xzj26 is approved, waiting to be issued
Mar 13 10:33:28.092628 master-0 kubenswrapper[4091]: I0313 10:33:28.092405 4091 csr.go:257] certificate signing request csr-xzj26 is issued
Mar 13 10:33:29.094290 master-0 kubenswrapper[4091]: I0313 10:33:29.094236 4091 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 10:24:19 +0000 UTC, rotation deadline is 2026-03-14 07:52:35.756486769 +0000 UTC
Mar 13 10:33:29.095073 master-0 kubenswrapper[4091]: I0313 10:33:29.095006 4091 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h19m6.661508084s for next certificate rotation
Mar 13 10:33:29.219955 master-0 kubenswrapper[4091]: I0313 10:33:29.219868 4091 scope.go:117] "RemoveContainer" containerID="609f0e5551a709b73298eb7117d146c048b1a886bac85012fa0f0c1a2a1cd687"
Mar 13 10:33:29.220335 master-0 kubenswrapper[4091]: I0313 10:33:29.220282 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 13 10:33:29.701661 master-0 kubenswrapper[4091]: I0313 10:33:29.701491 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:33:29.702194 master-0 kubenswrapper[4091]: E0313 10:33:29.701700 4091 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 10:33:29.702194 master-0 kubenswrapper[4091]: E0313 10:33:29.701768 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:33:33.701749056 +0000 UTC m=+52.390471518 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found
Mar 13 10:33:30.095824 master-0 kubenswrapper[4091]: I0313 10:33:30.095717 4091 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 10:24:19 +0000 UTC, rotation deadline is 2026-03-14 04:07:35.594104595 +0000 UTC
Mar 13 10:33:30.095824 master-0 kubenswrapper[4091]: I0313 10:33:30.095778 4091 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h34m5.498330689s for next certificate rotation
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: E0313 10:33:32.742266 4091 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,Command:[/bin/bash -c #!/bin/bash
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: set -o allexport
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: source /etc/kubernetes/apiserver-url.env
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: else
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: exit 1
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: fi
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9242604e78efada5aeb232d73a7963f806b754213f5d92b1dffc9b493d7b5a65,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPO
LICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b19b9d0e5437b0bb19cafc3fb516f654c911cdf11184c0de9a27b43c6b80c9ce,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3aa7c84e73a2a19cc9baca38b7e86dfcde579aa88221647c332c83f047d5ae6d,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bfe4d3125d98cc501d
5a529d3ae2497106a2bbb5a6dd06df7c0e0930d168212,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b62afe74fdcb011a4a8c8fa5572dbab2514dda673ae4be4c6beaef92d28216ba,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rfpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7c649bf6d4-6vpl4_openshift-network-operator(1d5f5440-b10c-40ea-9f1a-5f03babc1bd9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:33:32.742482 master-0 kubenswrapper[4091]: > logger="UnhandledError" Mar 13 10:33:32.743801 master-0 kubenswrapper[4091]: E0313 10:33:32.743552 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services 
have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" podUID="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" Mar 13 10:33:32.764645 master-0 kubenswrapper[4091]: E0313 10:33:32.764479 4091 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:assisted-installer-controller,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLUSTER_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:cluster-id,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:INVENTORY_URL,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:inventory-url,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:PULL_SECRET_TOKEN,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-secret,},Key:pull-secret-token,Optional:nil,},},},EnvVar{Name:CA_CERT_PATH,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:ca-cert-path,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:SKIP_CERT_VERIFICATION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:skip-cert-verification,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:OPENSHIFT_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:NOTI
FY_NUM_REBOOTS,Value:true,ValueFrom:nil,},EnvVar{Name:HIGH_AVAILABILITY_MODE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:high-availability-mode,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:CHECK_CLUSTER_VERSION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:check-cluster-version,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:MUST_GATHER_IMAGE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:must-gather-image,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-ca-bundle,ReadOnly:false,MountPath:/etc/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-run-resolv-conf,ReadOnly:false,MountPath:/tmp/var-run-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-resolv-conf,ReadOnly:false,MountPath:/tmp/host-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sno-bootstrap-files,ReadOnly:false,MountPath:/tmp/bootstrap-secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvcj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KIL
L MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000120000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod assisted-installer-controller-s68gq_assisted-installer(b8337424-8677-401d-8c68-b58c7d9ab99a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:33:32.765922 master-0 kubenswrapper[4091]: E0313 10:33:32.765820 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"assisted-installer-controller\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="assisted-installer/assisted-installer-controller-s68gq" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" Mar 13 10:33:33.401482 master-0 kubenswrapper[4091]: I0313 10:33:33.401399 4091 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 10:33:33.402641 master-0 kubenswrapper[4091]: I0313 10:33:33.402579 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"9c670eb6abb5de03cd978fcc4efcfd81c65dafc0d610959d205735ca6df3ab91"} Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: E0313 10:33:33.406123 4091 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: container 
&Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,Command:[/bin/bash -c #!/bin/bash Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: set -o allexport Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: source /etc/kubernetes/apiserver-url.env Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: else Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: exit 1 Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: fi Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9242604e78efada5aeb232d73a7963f806b754213f5d92b1dffc9b493d7b5a65,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245,ValueFrom:nil,},EnvVar{Name:BON
D_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b19b9d0e5437b0bb19cafc3fb516f654c911cdf11184c0de9a27b43c6b80c9ce,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3aa7c84e73a2a19cc9baca38b7e86dfcde579aa88221647c332c83f047d5ae6d,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bfe4d3125d98cc501d5a529d3ae2497106a2bbb5a6dd06df7c0e0930d168212,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b62afe74fdcb011a4a8c8fa5572dbab2514dda673ae4be4c6beaef92d28216ba,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rfpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7c649bf6d4-6vpl4_openshift-network-operator(1d5f5440-b10c-40ea-9f1a-5f03babc1bd9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:33:33.406190 master-0 kubenswrapper[4091]: > logger="UnhandledError" Mar 13 10:33:33.408970 master-0 kubenswrapper[4091]: E0313 10:33:33.408031 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" podUID="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" Mar 13 10:33:33.408970 master-0 kubenswrapper[4091]: E0313 10:33:33.408184 4091 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:assisted-installer-controller,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLUSTER_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:cluster-id,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:INVENTORY_URL,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:inventory-url,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:PULL_SECRET_TOKEN,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-secret,},Key:pull-secret-token,Optional:nil,},},},EnvVar{Name:CA_CERT_PATH,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:ca-cert-path,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:SKIP_CERT_VERIFICATION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:skip-cert-verification,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:OPENSHIFT_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:NOTIFY_NUM_REBOOTS,Value:true,ValueFrom:nil,},EnvVar{Name:HIGH_AVAILABILITY_MODE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:high-availability-mode,Optional:*true,},SecretKeyRe
f:nil,},},EnvVar{Name:CHECK_CLUSTER_VERSION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:check-cluster-version,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:MUST_GATHER_IMAGE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:must-gather-image,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-ca-bundle,ReadOnly:false,MountPath:/etc/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-run-resolv-conf,ReadOnly:false,MountPath:/tmp/var-run-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-resolv-conf,ReadOnly:false,MountPath:/tmp/host-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sno-bootstrap-files,ReadOnly:false,MountPath:/tmp/bootstrap-secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvcj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID 
SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000120000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod assisted-installer-controller-s68gq_assisted-installer(b8337424-8677-401d-8c68-b58c7d9ab99a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:33:33.409529 master-0 kubenswrapper[4091]: E0313 10:33:33.409373 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"assisted-installer-controller\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="assisted-installer/assisted-installer-controller-s68gq" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" Mar 13 10:33:33.439180 master-0 kubenswrapper[4091]: I0313 10:33:33.439022 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=4.438981806 podStartE2EDuration="4.438981806s" podCreationTimestamp="2026-03-13 10:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:33:33.438722089 +0000 UTC m=+52.127444621" watchObservedRunningTime="2026-03-13 10:33:33.438981806 +0000 UTC m=+52.127704308" Mar 13 10:33:33.731185 master-0 kubenswrapper[4091]: I0313 10:33:33.730925 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod 
\"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:33:33.731185 master-0 kubenswrapper[4091]: E0313 10:33:33.731111 4091 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 10:33:33.731185 master-0 kubenswrapper[4091]: E0313 10:33:33.731217 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:33:41.731186576 +0000 UTC m=+60.419909068 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found Mar 13 10:33:41.790309 master-0 kubenswrapper[4091]: I0313 10:33:41.790095 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:33:41.790309 master-0 kubenswrapper[4091]: E0313 10:33:41.790231 4091 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 10:33:41.790309 master-0 kubenswrapper[4091]: E0313 10:33:41.790290 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert 
podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:33:57.790272812 +0000 UTC m=+76.478995274 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found Mar 13 10:33:46.193636 master-0 kubenswrapper[4091]: I0313 10:33:46.193130 4091 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: E0313 10:33:46.205113 4091 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,Command:[/bin/bash -c #!/bin/bash Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: set -o allexport Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: source /etc/kubernetes/apiserver-url.env Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: else Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: exit 1 Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: fi Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9242604e78efada5aeb232d73a7963f806b754213f5d92b1dffc9b493d7b5a65,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b19b9d0e5437b0bb19cafc3fb516f654c911cdf11184c0de9a27b43c6b80c9ce,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,}
,EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3aa7c84e73a2a19cc9baca38b7e86dfcde579aa88221647c332c83f047d5ae6d,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bfe4d3125d98cc501d5a529d3ae2497106a2bbb5a6dd06df7c0e0930d168212,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b62afe74fdcb011a4a8c8fa5572dbab2514dda673ae4be4c6beaef92d28216ba,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceF
ieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rfpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7c649bf6d4-6vpl4_openshift-network-operator(1d5f5440-b10c-40ea-9f1a-5f03babc1bd9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:33:46.206204 master-0 kubenswrapper[4091]: > logger="UnhandledError" Mar 13 10:33:46.207360 master-0 kubenswrapper[4091]: E0313 10:33:46.206285 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" podUID="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" Mar 13 10:33:48.204821 master-0 kubenswrapper[4091]: E0313 10:33:48.204681 4091 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:assisted-installer-controller,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLUSTER_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:cluster-id,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:INVENTORY_URL,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:inventory-url,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:PULL_SECRET_TOKEN,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-secret,},Key:pull-secret-token,Optional:nil,},},},EnvVar{Name:CA_CERT_PATH,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:ca-cert-path,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:SKIP_CERT_VERIFICATION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:skip-cert-verification,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:OPENSHIFT_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:NOTIFY_NUM_REBOOTS,Value:true,ValueFrom:nil,},EnvVar{Name:HIGH_AVAILABILITY_MODE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:high-availability-mode,Optional:*true,},SecretKeyRe
f:nil,},},EnvVar{Name:CHECK_CLUSTER_VERSION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:check-cluster-version,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:MUST_GATHER_IMAGE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:must-gather-image,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-ca-bundle,ReadOnly:false,MountPath:/etc/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-run-resolv-conf,ReadOnly:false,MountPath:/tmp/var-run-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-resolv-conf,ReadOnly:false,MountPath:/tmp/host-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sno-bootstrap-files,ReadOnly:false,MountPath:/tmp/bootstrap-secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvcj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID 
SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000120000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod assisted-installer-controller-s68gq_assisted-installer(b8337424-8677-401d-8c68-b58c7d9ab99a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:33:48.206160 master-0 kubenswrapper[4091]: E0313 10:33:48.206091 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"assisted-installer-controller\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="assisted-installer/assisted-installer-controller-s68gq" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" Mar 13 10:33:52.216686 master-0 kubenswrapper[4091]: I0313 10:33:52.216538 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 10:33:52.217718 master-0 kubenswrapper[4091]: W0313 10:33:52.216853 4091 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set 
securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 13 10:33:57.815300 master-0 kubenswrapper[4091]: I0313 10:33:57.815170 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:33:57.816063 master-0 kubenswrapper[4091]: E0313 10:33:57.815427 4091 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 10:33:57.816063 master-0 kubenswrapper[4091]: E0313 10:33:57.815581 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:34:29.815542298 +0000 UTC m=+108.504264760 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found Mar 13 10:34:00.204638 master-0 kubenswrapper[4091]: E0313 10:34:00.204530 4091 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:assisted-installer-controller,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLUSTER_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:cluster-id,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:INVENTORY_URL,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:inventory-url,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:PULL_SECRET_TOKEN,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-secret,},Key:pull-secret-token,Optional:nil,},},},EnvVar{Name:CA_CERT_PATH,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:ca-cert-path,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:SKIP_CERT_VERIFICATION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:skip-cert-verifica
tion,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:OPENSHIFT_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:NOTIFY_NUM_REBOOTS,Value:true,ValueFrom:nil,},EnvVar{Name:HIGH_AVAILABILITY_MODE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:high-availability-mode,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:CHECK_CLUSTER_VERSION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:check-cluster-version,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:MUST_GATHER_IMAGE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:must-gather-image,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-ca-bundle,ReadOnly:false,MountPath:/etc/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-run-resolv-conf,ReadOnly:false,MountPath:/tmp/var-run-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-resolv-conf,ReadOnly:false,MountPath:/tmp/host-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sno-bootstrap-files,ReadOnly:false,MountPath:/tmp/bootstrap-secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvcj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/te
rmination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000120000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod assisted-installer-controller-s68gq_assisted-installer(b8337424-8677-401d-8c68-b58c7d9ab99a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 13 10:34:00.205816 master-0 kubenswrapper[4091]: E0313 10:34:00.205757 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"assisted-installer-controller\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="assisted-installer/assisted-installer-controller-s68gq" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: E0313 10:34:01.204512 4091 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,Command:[/bin/bash -c #!/bin/bash Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: set -o allexport Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: source /etc/kubernetes/apiserver-url.env Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: else Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: echo 
"Error: /etc/kubernetes/apiserver-url.env is missing" Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: exit 1 Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: fi Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9242604e78efada5aeb232d73a7963f806b754213f5d92b1dffc9b493d7b5a65,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:b19b9d0e5437b0bb19cafc3fb516f654c911cdf11184c0de9a27b43c6b80c9ce,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3aa7c84e73a2a19cc9baca38b7e86dfcde579aa88221647c332c83f047d5ae6d,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bfe4d3125d98cc501d5a529d3ae2497106a2bbb5a6dd06df7c0e0
930d168212,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b62afe74fdcb011a4a8c8fa5572dbab2514dda673ae4be4c6beaef92d28216ba,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rfpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7c649bf6d4-6vpl4_openshift-network-operator(1d5f5440-b10c-40ea-9f1a-5f03babc1bd9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 13 10:34:01.204570 master-0 kubenswrapper[4091]: > logger="UnhandledError" Mar 13 10:34:01.206505 master-0 kubenswrapper[4091]: E0313 10:34:01.206470 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least 
once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" podUID="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" Mar 13 10:34:04.313784 master-0 kubenswrapper[4091]: I0313 10:34:04.313717 4091 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 10:34:04.498221 master-0 kubenswrapper[4091]: I0313 10:34:04.498129 4091 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 10:34:05.215766 master-0 kubenswrapper[4091]: I0313 10:34:05.215664 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=13.2156409 podStartE2EDuration="13.2156409s" podCreationTimestamp="2026-03-13 10:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:34:03.096723983 +0000 UTC m=+81.785446455" watchObservedRunningTime="2026-03-13 10:34:05.2156409 +0000 UTC m=+83.904363372" Mar 13 10:34:05.216048 master-0 kubenswrapper[4091]: I0313 10:34:05.215815 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 10:34:12.235376 master-0 kubenswrapper[4091]: I0313 10:34:12.235283 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=7.235263294 podStartE2EDuration="7.235263294s" podCreationTimestamp="2026-03-13 10:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:34:12.234987777 +0000 UTC m=+90.923710249" watchObservedRunningTime="2026-03-13 10:34:12.235263294 +0000 UTC m=+90.923985756" Mar 13 10:34:13.209794 master-0 kubenswrapper[4091]: I0313 10:34:13.209729 4091 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"assisted-installer"/"assisted-installer-controller-config" Mar 13 10:34:13.220661 master-0 kubenswrapper[4091]: I0313 10:34:13.220577 4091 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 13 10:34:13.258998 master-0 kubenswrapper[4091]: I0313 10:34:13.258901 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 13 10:34:13.503749 master-0 kubenswrapper[4091]: I0313 10:34:13.503493 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-s68gq" event={"ID":"b8337424-8677-401d-8c68-b58c7d9ab99a","Type":"ContainerStarted","Data":"1bea0672139d7f4dff089e018c1c16d0afb0f3f466924f1394e930cdfd82c0f0"} Mar 13 10:34:14.508235 master-0 kubenswrapper[4091]: I0313 10:34:14.508175 4091 generic.go:334] "Generic (PLEG): container finished" podID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerID="1bea0672139d7f4dff089e018c1c16d0afb0f3f466924f1394e930cdfd82c0f0" exitCode=0 Mar 13 10:34:14.508235 master-0 kubenswrapper[4091]: I0313 10:34:14.508234 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-s68gq" event={"ID":"b8337424-8677-401d-8c68-b58c7d9ab99a","Type":"ContainerDied","Data":"1bea0672139d7f4dff089e018c1c16d0afb0f3f466924f1394e930cdfd82c0f0"} Mar 13 10:34:14.612131 master-0 kubenswrapper[4091]: I0313 10:34:14.612045 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=1.612019757 podStartE2EDuration="1.612019757s" podCreationTimestamp="2026-03-13 10:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:34:14.552701283 +0000 UTC m=+93.241423765" watchObservedRunningTime="2026-03-13 10:34:14.612019757 +0000 UTC 
m=+93.300742219" Mar 13 10:34:15.524763 master-0 kubenswrapper[4091]: I0313 10:34:15.524716 4091 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s68gq" Mar 13 10:34:15.555886 master-0 kubenswrapper[4091]: I0313 10:34:15.555817 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvcj8\" (UniqueName: \"kubernetes.io/projected/b8337424-8677-401d-8c68-b58c7d9ab99a-kube-api-access-bvcj8\") pod \"b8337424-8677-401d-8c68-b58c7d9ab99a\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " Mar 13 10:34:15.555886 master-0 kubenswrapper[4091]: I0313 10:34:15.555877 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-var-run-resolv-conf\") pod \"b8337424-8677-401d-8c68-b58c7d9ab99a\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " Mar 13 10:34:15.555886 master-0 kubenswrapper[4091]: I0313 10:34:15.555902 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-ca-bundle\") pod \"b8337424-8677-401d-8c68-b58c7d9ab99a\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " Mar 13 10:34:15.556309 master-0 kubenswrapper[4091]: I0313 10:34:15.555925 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-resolv-conf\") pod \"b8337424-8677-401d-8c68-b58c7d9ab99a\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " Mar 13 10:34:15.556309 master-0 kubenswrapper[4091]: I0313 10:34:15.555953 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: 
\"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-sno-bootstrap-files\") pod \"b8337424-8677-401d-8c68-b58c7d9ab99a\" (UID: \"b8337424-8677-401d-8c68-b58c7d9ab99a\") " Mar 13 10:34:15.556309 master-0 kubenswrapper[4091]: I0313 10:34:15.556078 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "b8337424-8677-401d-8c68-b58c7d9ab99a" (UID: "b8337424-8677-401d-8c68-b58c7d9ab99a"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:34:15.556309 master-0 kubenswrapper[4091]: I0313 10:34:15.556136 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "b8337424-8677-401d-8c68-b58c7d9ab99a" (UID: "b8337424-8677-401d-8c68-b58c7d9ab99a"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:34:15.556309 master-0 kubenswrapper[4091]: I0313 10:34:15.556157 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "b8337424-8677-401d-8c68-b58c7d9ab99a" (UID: "b8337424-8677-401d-8c68-b58c7d9ab99a"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:34:15.556309 master-0 kubenswrapper[4091]: I0313 10:34:15.556181 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "b8337424-8677-401d-8c68-b58c7d9ab99a" (UID: "b8337424-8677-401d-8c68-b58c7d9ab99a"). InnerVolumeSpecName "host-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:34:15.561327 master-0 kubenswrapper[4091]: I0313 10:34:15.561254 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8337424-8677-401d-8c68-b58c7d9ab99a-kube-api-access-bvcj8" (OuterVolumeSpecName: "kube-api-access-bvcj8") pod "b8337424-8677-401d-8c68-b58c7d9ab99a" (UID: "b8337424-8677-401d-8c68-b58c7d9ab99a"). InnerVolumeSpecName "kube-api-access-bvcj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:34:15.657124 master-0 kubenswrapper[4091]: I0313 10:34:15.657033 4091 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 13 10:34:15.657124 master-0 kubenswrapper[4091]: I0313 10:34:15.657086 4091 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvcj8\" (UniqueName: \"kubernetes.io/projected/b8337424-8677-401d-8c68-b58c7d9ab99a-kube-api-access-bvcj8\") on node \"master-0\" DevicePath \"\"" Mar 13 10:34:15.657124 master-0 kubenswrapper[4091]: I0313 10:34:15.657095 4091 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 13 10:34:15.657124 master-0 kubenswrapper[4091]: I0313 10:34:15.657105 4091 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 10:34:15.657124 master-0 kubenswrapper[4091]: I0313 10:34:15.657114 4091 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/b8337424-8677-401d-8c68-b58c7d9ab99a-host-resolv-conf\") on node \"master-0\" DevicePath 
\"\"" Mar 13 10:34:16.515786 master-0 kubenswrapper[4091]: I0313 10:34:16.515644 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-s68gq" event={"ID":"b8337424-8677-401d-8c68-b58c7d9ab99a","Type":"ContainerDied","Data":"3684ce24f4407551543f74ac9f1a5ab3d105e55ba443e4519febf4f030d8826c"} Mar 13 10:34:16.516077 master-0 kubenswrapper[4091]: I0313 10:34:16.516060 4091 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3684ce24f4407551543f74ac9f1a5ab3d105e55ba443e4519febf4f030d8826c" Mar 13 10:34:16.516173 master-0 kubenswrapper[4091]: I0313 10:34:16.515702 4091 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s68gq" Mar 13 10:34:16.517873 master-0 kubenswrapper[4091]: I0313 10:34:16.517831 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" event={"ID":"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9","Type":"ContainerStarted","Data":"5e2eaafddd132326dc9e3d7a39739553509b59eb3a4133fcdb22787eb5fde49c"} Mar 13 10:34:16.637386 master-0 kubenswrapper[4091]: I0313 10:34:16.637290 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" podStartSLOduration=45.167288454 podStartE2EDuration="51.637263712s" podCreationTimestamp="2026-03-13 10:33:25 +0000 UTC" firstStartedPulling="2026-03-13 10:33:26.269001442 +0000 UTC m=+44.957723934" lastFinishedPulling="2026-03-13 10:33:32.73897673 +0000 UTC m=+51.427699192" observedRunningTime="2026-03-13 10:34:16.637246962 +0000 UTC m=+95.325969434" watchObservedRunningTime="2026-03-13 10:34:16.637263712 +0000 UTC m=+95.325986174" Mar 13 10:34:18.359939 master-0 kubenswrapper[4091]: I0313 10:34:18.359867 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 
10:34:19.861118 master-0 kubenswrapper[4091]: I0313 10:34:19.861034 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-dmzf7"] Mar 13 10:34:19.861883 master-0 kubenswrapper[4091]: E0313 10:34:19.861204 4091 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerName="assisted-installer-controller" Mar 13 10:34:19.861883 master-0 kubenswrapper[4091]: I0313 10:34:19.861234 4091 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerName="assisted-installer-controller" Mar 13 10:34:19.861883 master-0 kubenswrapper[4091]: I0313 10:34:19.861290 4091 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerName="assisted-installer-controller" Mar 13 10:34:19.861883 master-0 kubenswrapper[4091]: I0313 10:34:19.861855 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-dmzf7" Mar 13 10:34:19.891066 master-0 kubenswrapper[4091]: I0313 10:34:19.891019 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89jdz\" (UniqueName: \"kubernetes.io/projected/d917075d-bc69-49b3-acab-c4d496dd04fc-kube-api-access-89jdz\") pod \"mtu-prober-dmzf7\" (UID: \"d917075d-bc69-49b3-acab-c4d496dd04fc\") " pod="openshift-network-operator/mtu-prober-dmzf7" Mar 13 10:34:19.991767 master-0 kubenswrapper[4091]: I0313 10:34:19.991688 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89jdz\" (UniqueName: \"kubernetes.io/projected/d917075d-bc69-49b3-acab-c4d496dd04fc-kube-api-access-89jdz\") pod \"mtu-prober-dmzf7\" (UID: \"d917075d-bc69-49b3-acab-c4d496dd04fc\") " pod="openshift-network-operator/mtu-prober-dmzf7" Mar 13 10:34:20.110076 master-0 kubenswrapper[4091]: I0313 10:34:20.109982 4091 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=2.109926523 podStartE2EDuration="2.109926523s" podCreationTimestamp="2026-03-13 10:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:34:20.10979705 +0000 UTC m=+98.798519522" watchObservedRunningTime="2026-03-13 10:34:20.109926523 +0000 UTC m=+98.798649005" Mar 13 10:34:20.128306 master-0 kubenswrapper[4091]: I0313 10:34:20.128118 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89jdz\" (UniqueName: \"kubernetes.io/projected/d917075d-bc69-49b3-acab-c4d496dd04fc-kube-api-access-89jdz\") pod \"mtu-prober-dmzf7\" (UID: \"d917075d-bc69-49b3-acab-c4d496dd04fc\") " pod="openshift-network-operator/mtu-prober-dmzf7" Mar 13 10:34:20.177250 master-0 kubenswrapper[4091]: I0313 10:34:20.177192 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-dmzf7" Mar 13 10:34:20.527700 master-0 kubenswrapper[4091]: I0313 10:34:20.527650 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-dmzf7" event={"ID":"d917075d-bc69-49b3-acab-c4d496dd04fc","Type":"ContainerStarted","Data":"4f342d2d66294bd06ac08cc498f323a859474645f1865395b674bff6a68af1e6"} Mar 13 10:34:20.527700 master-0 kubenswrapper[4091]: I0313 10:34:20.527701 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-dmzf7" event={"ID":"d917075d-bc69-49b3-acab-c4d496dd04fc","Type":"ContainerStarted","Data":"4dbe88fb4909398ce9a6240667ba14343e79180353202a50737fcc30200eae3a"} Mar 13 10:34:21.533767 master-0 kubenswrapper[4091]: I0313 10:34:21.533713 4091 generic.go:334] "Generic (PLEG): container finished" podID="d917075d-bc69-49b3-acab-c4d496dd04fc" containerID="4f342d2d66294bd06ac08cc498f323a859474645f1865395b674bff6a68af1e6" exitCode=0 Mar 13 10:34:21.534453 master-0 kubenswrapper[4091]: I0313 10:34:21.533851 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-dmzf7" event={"ID":"d917075d-bc69-49b3-acab-c4d496dd04fc","Type":"ContainerDied","Data":"4f342d2d66294bd06ac08cc498f323a859474645f1865395b674bff6a68af1e6"} Mar 13 10:34:22.553633 master-0 kubenswrapper[4091]: I0313 10:34:22.553558 4091 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-dmzf7" Mar 13 10:34:22.613562 master-0 kubenswrapper[4091]: I0313 10:34:22.613476 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89jdz\" (UniqueName: \"kubernetes.io/projected/d917075d-bc69-49b3-acab-c4d496dd04fc-kube-api-access-89jdz\") pod \"d917075d-bc69-49b3-acab-c4d496dd04fc\" (UID: \"d917075d-bc69-49b3-acab-c4d496dd04fc\") " Mar 13 10:34:22.618005 master-0 kubenswrapper[4091]: I0313 10:34:22.617911 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d917075d-bc69-49b3-acab-c4d496dd04fc-kube-api-access-89jdz" (OuterVolumeSpecName: "kube-api-access-89jdz") pod "d917075d-bc69-49b3-acab-c4d496dd04fc" (UID: "d917075d-bc69-49b3-acab-c4d496dd04fc"). InnerVolumeSpecName "kube-api-access-89jdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:34:22.714678 master-0 kubenswrapper[4091]: I0313 10:34:22.714616 4091 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89jdz\" (UniqueName: \"kubernetes.io/projected/d917075d-bc69-49b3-acab-c4d496dd04fc-kube-api-access-89jdz\") on node \"master-0\" DevicePath \"\"" Mar 13 10:34:23.540652 master-0 kubenswrapper[4091]: I0313 10:34:23.540572 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-dmzf7" event={"ID":"d917075d-bc69-49b3-acab-c4d496dd04fc","Type":"ContainerDied","Data":"4dbe88fb4909398ce9a6240667ba14343e79180353202a50737fcc30200eae3a"} Mar 13 10:34:23.540652 master-0 kubenswrapper[4091]: I0313 10:34:23.540645 4091 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dbe88fb4909398ce9a6240667ba14343e79180353202a50737fcc30200eae3a" Mar 13 10:34:23.540957 master-0 kubenswrapper[4091]: I0313 10:34:23.540693 4091 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-dmzf7" Mar 13 10:34:24.627240 master-0 kubenswrapper[4091]: I0313 10:34:24.627168 4091 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-dmzf7"] Mar 13 10:34:24.632660 master-0 kubenswrapper[4091]: I0313 10:34:24.632619 4091 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-dmzf7"] Mar 13 10:34:26.208449 master-0 kubenswrapper[4091]: I0313 10:34:26.208360 4091 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d917075d-bc69-49b3-acab-c4d496dd04fc" path="/var/lib/kubelet/pods/d917075d-bc69-49b3-acab-c4d496dd04fc/volumes" Mar 13 10:34:29.483024 master-0 kubenswrapper[4091]: I0313 10:34:29.482956 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qng6t"] Mar 13 10:34:29.483918 master-0 kubenswrapper[4091]: E0313 10:34:29.483089 4091 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d917075d-bc69-49b3-acab-c4d496dd04fc" containerName="prober" Mar 13 10:34:29.483918 master-0 kubenswrapper[4091]: I0313 10:34:29.483322 4091 state_mem.go:107] "Deleted CPUSet assignment" podUID="d917075d-bc69-49b3-acab-c4d496dd04fc" containerName="prober" Mar 13 10:34:29.483918 master-0 kubenswrapper[4091]: I0313 10:34:29.483365 4091 memory_manager.go:354] "RemoveStaleState removing state" podUID="d917075d-bc69-49b3-acab-c4d496dd04fc" containerName="prober" Mar 13 10:34:29.483918 master-0 kubenswrapper[4091]: I0313 10:34:29.483699 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.486309 master-0 kubenswrapper[4091]: I0313 10:34:29.486261 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 10:34:29.486447 master-0 kubenswrapper[4091]: I0313 10:34:29.486412 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 10:34:29.486636 master-0 kubenswrapper[4091]: I0313 10:34:29.486607 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 10:34:29.490006 master-0 kubenswrapper[4091]: I0313 10:34:29.489977 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 10:34:29.568906 master-0 kubenswrapper[4091]: I0313 10:34:29.568826 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-etc-kubernetes\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569225 master-0 kubenswrapper[4091]: I0313 10:34:29.568971 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-os-release\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569225 master-0 kubenswrapper[4091]: I0313 10:34:29.569062 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-cnibin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569225 master-0 
kubenswrapper[4091]: I0313 10:34:29.569179 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569377 master-0 kubenswrapper[4091]: I0313 10:34:29.569235 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-bin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569377 master-0 kubenswrapper[4091]: I0313 10:34:29.569255 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjvtr\" (UniqueName: \"kubernetes.io/projected/9aa4b44d-f202-4670-afab-44b38960026f-kube-api-access-bjvtr\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569377 master-0 kubenswrapper[4091]: I0313 10:34:29.569306 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-cni-binary-copy\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569377 master-0 kubenswrapper[4091]: I0313 10:34:29.569348 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-kubelet\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 
10:34:29.569377 master-0 kubenswrapper[4091]: I0313 10:34:29.569364 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-conf-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569377 master-0 kubenswrapper[4091]: I0313 10:34:29.569387 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-system-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569640 master-0 kubenswrapper[4091]: I0313 10:34:29.569431 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-multus-daemon-config\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569640 master-0 kubenswrapper[4091]: I0313 10:34:29.569453 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-socket-dir-parent\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569640 master-0 kubenswrapper[4091]: I0313 10:34:29.569469 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-k8s-cni-cncf-io\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " 
pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569640 master-0 kubenswrapper[4091]: I0313 10:34:29.569488 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-netns\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569640 master-0 kubenswrapper[4091]: I0313 10:34:29.569507 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-multus\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569640 master-0 kubenswrapper[4091]: I0313 10:34:29.569524 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-hostroot\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.569640 master-0 kubenswrapper[4091]: I0313 10:34:29.569539 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-multus-certs\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670542 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-os-release\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " 
pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670609 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-etc-kubernetes\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670636 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-cnibin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670652 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670669 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-cni-binary-copy\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670686 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-bin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670702 4091 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvtr\" (UniqueName: \"kubernetes.io/projected/9aa4b44d-f202-4670-afab-44b38960026f-kube-api-access-bjvtr\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670720 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-kubelet\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670735 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-conf-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670763 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-system-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670781 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-socket-dir-parent\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670796 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-k8s-cni-cncf-io\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670810 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-netns\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670824 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-multus-daemon-config\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670841 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-multus\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670856 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-hostroot\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.670871 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-multus-certs\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.673238 master-0 kubenswrapper[4091]: I0313 10:34:29.672516 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-multus-certs\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.672632 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-os-release\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.672655 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-etc-kubernetes\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.672690 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-cnibin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.672917 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " 
pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.673160 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-system-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.673315 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-netns\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.673571 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-bin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.673847 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-kubelet\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.673911 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-socket-dir-parent\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.673880 4091 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-conf-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.673983 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-k8s-cni-cncf-io\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.674017 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-multus\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.674156 master-0 kubenswrapper[4091]: I0313 10:34:29.674052 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-hostroot\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.675280 master-0 kubenswrapper[4091]: I0313 10:34:29.675252 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-cni-binary-copy\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.675381 master-0 kubenswrapper[4091]: I0313 10:34:29.675338 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-multus-daemon-config\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.685473 master-0 kubenswrapper[4091]: I0313 10:34:29.685075 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mc5nc"] Mar 13 10:34:29.686731 master-0 kubenswrapper[4091]: I0313 10:34:29.685949 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.688751 master-0 kubenswrapper[4091]: I0313 10:34:29.688676 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 10:34:29.690727 master-0 kubenswrapper[4091]: I0313 10:34:29.690530 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 10:34:29.706688 master-0 kubenswrapper[4091]: I0313 10:34:29.705388 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvtr\" (UniqueName: \"kubernetes.io/projected/9aa4b44d-f202-4670-afab-44b38960026f-kube-api-access-bjvtr\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.773028 master-0 kubenswrapper[4091]: I0313 10:34:29.772852 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-system-cni-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.773028 master-0 kubenswrapper[4091]: I0313 10:34:29.772970 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.773028 master-0 kubenswrapper[4091]: I0313 10:34:29.772993 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.773028 master-0 kubenswrapper[4091]: I0313 10:34:29.773012 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-cnibin\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.773355 master-0 kubenswrapper[4091]: I0313 10:34:29.773051 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-os-release\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.773355 master-0 kubenswrapper[4091]: I0313 10:34:29.773071 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " 
pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.773355 master-0 kubenswrapper[4091]: I0313 10:34:29.773095 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6nnz\" (UniqueName: \"kubernetes.io/projected/5843b0d4-a538-4261-b425-598e318c9d07-kube-api-access-r6nnz\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.773355 master-0 kubenswrapper[4091]: I0313 10:34:29.773132 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-binary-copy\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.796901 master-0 kubenswrapper[4091]: I0313 10:34:29.796820 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-qng6t" Mar 13 10:34:29.873920 master-0 kubenswrapper[4091]: I0313 10:34:29.873844 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-binary-copy\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.874192 master-0 kubenswrapper[4091]: I0313 10:34:29.874068 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-system-cni-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.874192 master-0 kubenswrapper[4091]: I0313 10:34:29.874102 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:34:29.874192 master-0 kubenswrapper[4091]: I0313 10:34:29.874127 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.874192 master-0 kubenswrapper[4091]: I0313 10:34:29.874146 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.874192 master-0 kubenswrapper[4091]: I0313 10:34:29.874170 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-cnibin\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.874192 master-0 kubenswrapper[4091]: I0313 10:34:29.874187 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-os-release\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.874192 master-0 kubenswrapper[4091]: I0313 10:34:29.874204 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.874410 master-0 kubenswrapper[4091]: I0313 10:34:29.874222 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6nnz\" (UniqueName: \"kubernetes.io/projected/5843b0d4-a538-4261-b425-598e318c9d07-kube-api-access-r6nnz\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.875389 master-0 kubenswrapper[4091]: I0313 
10:34:29.874760 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.875389 master-0 kubenswrapper[4091]: I0313 10:34:29.874837 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-os-release\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.875389 master-0 kubenswrapper[4091]: I0313 10:34:29.874863 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-cnibin\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.875389 master-0 kubenswrapper[4091]: E0313 10:34:29.874956 4091 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 10:34:29.875389 master-0 kubenswrapper[4091]: E0313 10:34:29.875000 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:33.874984983 +0000 UTC m=+172.563707435 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found Mar 13 10:34:29.875389 master-0 kubenswrapper[4091]: I0313 10:34:29.875005 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-system-cni-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.875389 master-0 kubenswrapper[4091]: I0313 10:34:29.875278 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-binary-copy\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.875389 master-0 kubenswrapper[4091]: I0313 10:34:29.875318 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:29.876316 master-0 kubenswrapper[4091]: I0313 10:34:29.876283 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 
10:34:29.894156 master-0 kubenswrapper[4091]: I0313 10:34:29.894111 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6nnz\" (UniqueName: \"kubernetes.io/projected/5843b0d4-a538-4261-b425-598e318c9d07-kube-api-access-r6nnz\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:30.006408 master-0 kubenswrapper[4091]: I0313 10:34:30.006307 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:34:30.020644 master-0 kubenswrapper[4091]: W0313 10:34:30.020569 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5843b0d4_a538_4261_b425_598e318c9d07.slice/crio-03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2 WatchSource:0}: Error finding container 03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2: Status 404 returned error can't find the container with id 03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2 Mar 13 10:34:30.492450 master-0 kubenswrapper[4091]: I0313 10:34:30.492372 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jz2lp"] Mar 13 10:34:30.493546 master-0 kubenswrapper[4091]: I0313 10:34:30.492829 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:30.493546 master-0 kubenswrapper[4091]: E0313 10:34:30.492952 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:30.562626 master-0 kubenswrapper[4091]: I0313 10:34:30.562508 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" event={"ID":"5843b0d4-a538-4261-b425-598e318c9d07","Type":"ContainerStarted","Data":"03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2"} Mar 13 10:34:30.563621 master-0 kubenswrapper[4091]: I0313 10:34:30.563536 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qng6t" event={"ID":"9aa4b44d-f202-4670-afab-44b38960026f","Type":"ContainerStarted","Data":"05e1407c7b27a4b6e8d757f9a77812ff8adcb8afeba6392964446e6020251829"} Mar 13 10:34:30.590275 master-0 kubenswrapper[4091]: I0313 10:34:30.590188 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:30.590556 master-0 kubenswrapper[4091]: I0313 10:34:30.590318 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56qz6\" (UniqueName: \"kubernetes.io/projected/79bb87a4-8834-4c73-834e-356ccc1f7f9b-kube-api-access-56qz6\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:30.691440 master-0 kubenswrapper[4091]: I0313 10:34:30.691329 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56qz6\" (UniqueName: \"kubernetes.io/projected/79bb87a4-8834-4c73-834e-356ccc1f7f9b-kube-api-access-56qz6\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " 
pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:30.691440 master-0 kubenswrapper[4091]: I0313 10:34:30.691412 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:30.691787 master-0 kubenswrapper[4091]: E0313 10:34:30.691762 4091 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:30.691888 master-0 kubenswrapper[4091]: E0313 10:34:30.691855 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:34:31.191831507 +0000 UTC m=+109.880553969 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:30.708779 master-0 kubenswrapper[4091]: I0313 10:34:30.708725 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56qz6\" (UniqueName: \"kubernetes.io/projected/79bb87a4-8834-4c73-834e-356ccc1f7f9b-kube-api-access-56qz6\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:31.196917 master-0 kubenswrapper[4091]: I0313 10:34:31.196854 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:31.197186 master-0 kubenswrapper[4091]: E0313 10:34:31.197067 4091 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:31.197186 master-0 kubenswrapper[4091]: E0313 10:34:31.197154 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:34:32.19713355 +0000 UTC m=+110.885856012 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:32.202084 master-0 kubenswrapper[4091]: I0313 10:34:32.202007 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:32.203140 master-0 kubenswrapper[4091]: E0313 10:34:32.202458 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:32.206204 master-0 kubenswrapper[4091]: I0313 10:34:32.206155 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:32.206333 master-0 kubenswrapper[4091]: E0313 10:34:32.206291 4091 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:32.206472 master-0 kubenswrapper[4091]: E0313 10:34:32.206432 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:34:34.206411564 +0000 UTC m=+112.895134026 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:33.573506 master-0 kubenswrapper[4091]: I0313 10:34:33.573442 4091 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="1a1885581af587b9ba505d0bc5381467495165cc081fe48fe67060864afa4c7a" exitCode=0 Mar 13 10:34:33.573506 master-0 kubenswrapper[4091]: I0313 10:34:33.573509 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" event={"ID":"5843b0d4-a538-4261-b425-598e318c9d07","Type":"ContainerDied","Data":"1a1885581af587b9ba505d0bc5381467495165cc081fe48fe67060864afa4c7a"} Mar 13 10:34:34.202844 master-0 kubenswrapper[4091]: I0313 10:34:34.202752 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:34.203093 master-0 kubenswrapper[4091]: E0313 10:34:34.202956 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:34.221032 master-0 kubenswrapper[4091]: I0313 10:34:34.220956 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:34.221362 master-0 kubenswrapper[4091]: E0313 10:34:34.221129 4091 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:34.221362 master-0 kubenswrapper[4091]: E0313 10:34:34.221201 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:34:38.221183197 +0000 UTC m=+116.909905659 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:36.202516 master-0 kubenswrapper[4091]: I0313 10:34:36.202393 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:36.205149 master-0 kubenswrapper[4091]: E0313 10:34:36.202556 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:38.203050 master-0 kubenswrapper[4091]: I0313 10:34:38.202238 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:38.203050 master-0 kubenswrapper[4091]: E0313 10:34:38.202404 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:38.255643 master-0 kubenswrapper[4091]: I0313 10:34:38.255547 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:38.255894 master-0 kubenswrapper[4091]: E0313 10:34:38.255782 4091 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:38.255894 master-0 kubenswrapper[4091]: E0313 10:34:38.255866 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:34:46.255843989 +0000 UTC m=+124.944566451 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:34:40.202677 master-0 kubenswrapper[4091]: I0313 10:34:40.202283 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:40.202677 master-0 kubenswrapper[4091]: E0313 10:34:40.202483 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:41.896046 master-0 kubenswrapper[4091]: I0313 10:34:41.890986 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"] Mar 13 10:34:41.896046 master-0 kubenswrapper[4091]: I0313 10:34:41.891421 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:34:41.897794 master-0 kubenswrapper[4091]: I0313 10:34:41.897083 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 10:34:41.897794 master-0 kubenswrapper[4091]: I0313 10:34:41.897349 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 10:34:41.897794 master-0 kubenswrapper[4091]: I0313 10:34:41.897386 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 10:34:41.897794 master-0 kubenswrapper[4091]: I0313 10:34:41.897523 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 10:34:41.897794 master-0 kubenswrapper[4091]: I0313 10:34:41.897656 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 10:34:41.965099 master-0 kubenswrapper[4091]: E0313 10:34:41.965030 4091 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 13 10:34:41.990558 master-0 kubenswrapper[4091]: I0313 10:34:41.990498 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:34:41.990558 master-0 kubenswrapper[4091]: I0313 10:34:41.990559 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:34:41.990558 master-0 kubenswrapper[4091]: I0313 10:34:41.990600 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg69z\" (UniqueName: \"kubernetes.io/projected/1c12a5d5-711f-4663-974c-c4b06e15fc39-kube-api-access-cg69z\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:34:41.990963 master-0 kubenswrapper[4091]: I0313 10:34:41.990661 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:34:42.091899 master-0 kubenswrapper[4091]: I0313 10:34:42.091830 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:34:42.091899 master-0 kubenswrapper[4091]: I0313 10:34:42.091885 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:34:42.091899 master-0 kubenswrapper[4091]: 
I0313 10:34:42.091913 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:34:42.092242 master-0 kubenswrapper[4091]: I0313 10:34:42.091935 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg69z\" (UniqueName: \"kubernetes.io/projected/1c12a5d5-711f-4663-974c-c4b06e15fc39-kube-api-access-cg69z\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:34:42.093042 master-0 kubenswrapper[4091]: I0313 10:34:42.092997 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:34:42.093573 master-0 kubenswrapper[4091]: I0313 10:34:42.093550 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:34:42.104384 master-0 kubenswrapper[4091]: I0313 10:34:42.099442 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:34:42.107205 master-0 kubenswrapper[4091]: I0313 10:34:42.107138 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t6fzx"]
Mar 13 10:34:42.108418 master-0 kubenswrapper[4091]: I0313 10:34:42.108385 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.116934 master-0 kubenswrapper[4091]: I0313 10:34:42.116874 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg69z\" (UniqueName: \"kubernetes.io/projected/1c12a5d5-711f-4663-974c-c4b06e15fc39-kube-api-access-cg69z\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:34:42.119539 master-0 kubenswrapper[4091]: I0313 10:34:42.119353 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 13 10:34:42.119938 master-0 kubenswrapper[4091]: I0313 10:34:42.119750 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 10:34:42.193344 master-0 kubenswrapper[4091]: I0313 10:34:42.193172 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-netd\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193344 master-0 kubenswrapper[4091]: I0313 10:34:42.193236 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-slash\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193344 master-0 kubenswrapper[4091]: I0313 10:34:42.193256 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-env-overrides\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193344 master-0 kubenswrapper[4091]: I0313 10:34:42.193296 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-node-log\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193344 master-0 kubenswrapper[4091]: I0313 10:34:42.193319 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-netns\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193396 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193427 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-openvswitch\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193454 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-ovn-kubernetes\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193480 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-config\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193504 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-kubelet\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193526 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-ovn\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193545 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-systemd-units\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193562 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-etc-openvswitch\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193600 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-bin\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.193735 master-0 kubenswrapper[4091]: I0313 10:34:42.193618 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsg5\" (UniqueName: \"kubernetes.io/projected/95ce747e-1691-4861-aaa2-892ce2eab47b-kube-api-access-kwsg5\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.194130 master-0 kubenswrapper[4091]: I0313 10:34:42.193790 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-systemd\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.194130 master-0 kubenswrapper[4091]: I0313 10:34:42.193845 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-log-socket\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.194130 master-0 kubenswrapper[4091]: I0313 10:34:42.193899 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95ce747e-1691-4861-aaa2-892ce2eab47b-ovn-node-metrics-cert\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.194130 master-0 kubenswrapper[4091]: I0313 10:34:42.193930 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-script-lib\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.194130 master-0 kubenswrapper[4091]: I0313 10:34:42.193976 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-var-lib-openvswitch\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.202395 master-0 kubenswrapper[4091]: I0313 10:34:42.202328 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:34:42.203062 master-0 kubenswrapper[4091]: E0313 10:34:42.202987 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b"
Mar 13 10:34:42.210123 master-0 kubenswrapper[4091]: I0313 10:34:42.210056 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:34:42.294756 master-0 kubenswrapper[4091]: I0313 10:34:42.294644 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-etc-openvswitch\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.294756 master-0 kubenswrapper[4091]: I0313 10:34:42.294705 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-bin\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.294756 master-0 kubenswrapper[4091]: I0313 10:34:42.294728 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-systemd\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.294756 master-0 kubenswrapper[4091]: I0313 10:34:42.294748 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-log-socket\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.294756 master-0 kubenswrapper[4091]: I0313 10:34:42.294773 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsg5\" (UniqueName: \"kubernetes.io/projected/95ce747e-1691-4861-aaa2-892ce2eab47b-kube-api-access-kwsg5\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294817 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-var-lib-openvswitch\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294839 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95ce747e-1691-4861-aaa2-892ce2eab47b-ovn-node-metrics-cert\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294860 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-script-lib\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294880 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-netd\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294894 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-slash\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294909 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-env-overrides\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294933 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-node-log\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294951 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-netns\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294971 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.294990 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-openvswitch\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.295005 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-ovn-kubernetes\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.295024 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-kubelet\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.295039 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-ovn\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.295053 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-config\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.295069 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-systemd-units\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.295139 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-systemd-units\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.295452 master-0 kubenswrapper[4091]: I0313 10:34:42.295184 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-etc-openvswitch\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.297136 master-0 kubenswrapper[4091]: I0313 10:34:42.295206 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-bin\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.297136 master-0 kubenswrapper[4091]: I0313 10:34:42.295229 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-systemd\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.297136 master-0 kubenswrapper[4091]: I0313 10:34:42.295252 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-log-socket\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.297136 master-0 kubenswrapper[4091]: I0313 10:34:42.295648 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-var-lib-openvswitch\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.298396 master-0 kubenswrapper[4091]: I0313 10:34:42.298351 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95ce747e-1691-4861-aaa2-892ce2eab47b-ovn-node-metrics-cert\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299016 master-0 kubenswrapper[4091]: I0313 10:34:42.298977 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-script-lib\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299091 master-0 kubenswrapper[4091]: I0313 10:34:42.299026 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-netd\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299091 master-0 kubenswrapper[4091]: I0313 10:34:42.299052 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-slash\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299425 master-0 kubenswrapper[4091]: I0313 10:34:42.299384 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-env-overrides\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299503 master-0 kubenswrapper[4091]: I0313 10:34:42.299426 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-node-log\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299503 master-0 kubenswrapper[4091]: I0313 10:34:42.299456 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-netns\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299503 master-0 kubenswrapper[4091]: I0313 10:34:42.299480 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299503 master-0 kubenswrapper[4091]: I0313 10:34:42.299502 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-openvswitch\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299804 master-0 kubenswrapper[4091]: I0313 10:34:42.299523 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-ovn-kubernetes\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299804 master-0 kubenswrapper[4091]: I0313 10:34:42.299544 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-kubelet\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.299804 master-0 kubenswrapper[4091]: I0313 10:34:42.299566 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-ovn\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.300012 master-0 kubenswrapper[4091]: I0313 10:34:42.299977 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-config\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.361961 master-0 kubenswrapper[4091]: I0313 10:34:42.361360 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsg5\" (UniqueName: \"kubernetes.io/projected/95ce747e-1691-4861-aaa2-892ce2eab47b-kube-api-access-kwsg5\") pod \"ovnkube-node-t6fzx\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:42.382729 master-0 kubenswrapper[4091]: E0313 10:34:42.382643 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 13 10:34:42.431082 master-0 kubenswrapper[4091]: I0313 10:34:42.430988 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx"
Mar 13 10:34:44.203143 master-0 kubenswrapper[4091]: I0313 10:34:44.202747 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:34:44.203143 master-0 kubenswrapper[4091]: E0313 10:34:44.203036 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b"
Mar 13 10:34:46.202559 master-0 kubenswrapper[4091]: I0313 10:34:46.202452 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:34:46.203235 master-0 kubenswrapper[4091]: E0313 10:34:46.202692 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b"
Mar 13 10:34:46.328025 master-0 kubenswrapper[4091]: I0313 10:34:46.327943 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:34:46.328635 master-0 kubenswrapper[4091]: E0313 10:34:46.328200 4091 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 10:34:46.328635 master-0 kubenswrapper[4091]: E0313 10:34:46.328317 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:02.328292604 +0000 UTC m=+141.017015066 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 10:34:47.384616 master-0 kubenswrapper[4091]: E0313 10:34:47.384469 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 13 10:34:48.203020 master-0 kubenswrapper[4091]: I0313 10:34:48.202923 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:34:48.203328 master-0 kubenswrapper[4091]: E0313 10:34:48.203170 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b"
Mar 13 10:34:50.202177 master-0 kubenswrapper[4091]: I0313 10:34:50.201944 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:34:50.202177 master-0 kubenswrapper[4091]: E0313 10:34:50.202117 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b"
Mar 13 10:34:50.622076 master-0 kubenswrapper[4091]: I0313 10:34:50.621807 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qng6t" event={"ID":"9aa4b44d-f202-4670-afab-44b38960026f","Type":"ContainerStarted","Data":"31404f5035de2b80a71340691226307e01b3a66d546b4baf65d6f7308fd276a9"}
Mar 13 10:34:50.623739 master-0 kubenswrapper[4091]: I0313 10:34:50.623687 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" event={"ID":"1c12a5d5-711f-4663-974c-c4b06e15fc39","Type":"ContainerStarted","Data":"dd10a6ef5a385b54a38071eb12f0e47117a1cb19bb85a87ed5b6ca4c61c449a7"}
Mar 13 10:34:50.623739 master-0 kubenswrapper[4091]: I0313 10:34:50.623720 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" event={"ID":"1c12a5d5-711f-4663-974c-c4b06e15fc39","Type":"ContainerStarted","Data":"97a747ef867987de8a139981f17a1b239fcb5c28199b67ab78094a7f8154dc7c"}
Mar 13 10:34:50.625607 master-0 kubenswrapper[4091]: I0313 10:34:50.625558 4091 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="f1eb6056de76c4d6a8863b61770ab5ed8e00f850c41514ac1273f8663adc746a" exitCode=0
Mar 13 10:34:50.625664 master-0 kubenswrapper[4091]: I0313 10:34:50.625649 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" event={"ID":"5843b0d4-a538-4261-b425-598e318c9d07","Type":"ContainerDied","Data":"f1eb6056de76c4d6a8863b61770ab5ed8e00f850c41514ac1273f8663adc746a"}
Mar 13 10:34:50.626565 master-0 kubenswrapper[4091]: I0313 10:34:50.626542 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx" event={"ID":"95ce747e-1691-4861-aaa2-892ce2eab47b","Type":"ContainerStarted","Data":"f970075166524785d0812e7d696731dfa941b8a14592ce15d361cb8a9fc71f47"}
Mar 13 10:34:51.079689 master-0 kubenswrapper[4091]: I0313 10:34:51.079551 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qng6t" podStartSLOduration=1.8134921130000001 podStartE2EDuration="22.079531917s" podCreationTimestamp="2026-03-13 10:34:29 +0000 UTC" firstStartedPulling="2026-03-13 10:34:29.813718249 +0000 UTC m=+108.502440701" lastFinishedPulling="2026-03-13 10:34:50.079758033 +0000 UTC m=+128.768480505" observedRunningTime="2026-03-13 10:34:51.079423755 +0000 UTC m=+129.768146237" watchObservedRunningTime="2026-03-13 10:34:51.079531917 +0000 UTC m=+129.768254379"
Mar 13 10:34:51.220834 master-0 kubenswrapper[4091]: I0313 10:34:51.220769 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-96vwf"]
Mar 13 10:34:51.221515 master-0 kubenswrapper[4091]: I0313 10:34:51.221237 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:34:51.221515 master-0 kubenswrapper[4091]: E0313 10:34:51.221305 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c"
Mar 13 10:34:51.382948 master-0 kubenswrapper[4091]: I0313 10:34:51.382781 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:34:51.484042 master-0 kubenswrapper[4091]: I0313 10:34:51.483504 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:34:51.515310 master-0 kubenswrapper[4091]: E0313 10:34:51.515103 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 10:34:51.515310 master-0 kubenswrapper[4091]: E0313 10:34:51.515169 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 10:34:51.515310 master-0 kubenswrapper[4091]: E0313 10:34:51.515189 4091 projected.go:194] Error preparing data for projected volume kube-api-access-gchrx for pod openshift-network-diagnostics/network-check-target-96vwf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:34:51.515310 master-0 kubenswrapper[4091]: E0313 10:34:51.515272 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx podName:803de28e-3b31-4ea2-9b97-87a733635a5c nodeName:}" failed. No retries permitted until 2026-03-13 10:34:52.015249157 +0000 UTC m=+130.703971619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gchrx" (UniqueName: "kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx") pod "network-check-target-96vwf" (UID: "803de28e-3b31-4ea2-9b97-87a733635a5c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:34:52.090447 master-0 kubenswrapper[4091]: I0313 10:34:52.089822 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:34:52.090447 master-0 kubenswrapper[4091]: E0313 10:34:52.090010 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 10:34:52.090447 master-0 kubenswrapper[4091]: E0313 10:34:52.090034 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 10:34:52.090447 master-0 kubenswrapper[4091]: E0313 10:34:52.090048 4091 projected.go:194] Error preparing data for projected volume kube-api-access-gchrx for pod openshift-network-diagnostics/network-check-target-96vwf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 10:34:52.090447 master-0 kubenswrapper[4091]: E0313 10:34:52.090103 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx podName:803de28e-3b31-4ea2-9b97-87a733635a5c nodeName:}" failed. No retries permitted until 2026-03-13 10:34:53.090088543 +0000 UTC m=+131.778811005 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gchrx" (UniqueName: "kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx") pod "network-check-target-96vwf" (UID: "803de28e-3b31-4ea2-9b97-87a733635a5c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:34:52.203318 master-0 kubenswrapper[4091]: I0313 10:34:52.202561 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:52.206020 master-0 kubenswrapper[4091]: E0313 10:34:52.204188 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:52.321459 master-0 kubenswrapper[4091]: I0313 10:34:52.320157 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-9z8mk"] Mar 13 10:34:52.321459 master-0 kubenswrapper[4091]: I0313 10:34:52.320825 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.325452 master-0 kubenswrapper[4091]: I0313 10:34:52.323495 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 10:34:52.325452 master-0 kubenswrapper[4091]: I0313 10:34:52.324162 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 10:34:52.325638 master-0 kubenswrapper[4091]: I0313 10:34:52.325479 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 10:34:52.326355 master-0 kubenswrapper[4091]: I0313 10:34:52.325968 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 10:34:52.326993 master-0 kubenswrapper[4091]: I0313 10:34:52.326970 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 10:34:52.386025 master-0 kubenswrapper[4091]: E0313 10:34:52.385866 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 10:34:52.393411 master-0 kubenswrapper[4091]: I0313 10:34:52.393341 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-env-overrides\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.393411 master-0 kubenswrapper[4091]: I0313 10:34:52.393400 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.393633 master-0 kubenswrapper[4091]: I0313 10:34:52.393514 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-ovnkube-identity-cm\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.393746 master-0 kubenswrapper[4091]: I0313 10:34:52.393693 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4sg\" (UniqueName: \"kubernetes.io/projected/f87662b9-6ac6-44f3-8a16-ff858c2baa91-kube-api-access-zk4sg\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.495418 master-0 kubenswrapper[4091]: I0313 10:34:52.495346 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk4sg\" (UniqueName: 
\"kubernetes.io/projected/f87662b9-6ac6-44f3-8a16-ff858c2baa91-kube-api-access-zk4sg\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.495814 master-0 kubenswrapper[4091]: I0313 10:34:52.495708 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-env-overrides\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.495876 master-0 kubenswrapper[4091]: I0313 10:34:52.495860 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.495931 master-0 kubenswrapper[4091]: I0313 10:34:52.495906 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-ovnkube-identity-cm\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.496160 master-0 kubenswrapper[4091]: E0313 10:34:52.496115 4091 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Mar 13 10:34:52.496227 master-0 kubenswrapper[4091]: E0313 10:34:52.496207 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert podName:f87662b9-6ac6-44f3-8a16-ff858c2baa91 nodeName:}" failed. 
No retries permitted until 2026-03-13 10:34:52.996184406 +0000 UTC m=+131.684906948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert") pod "network-node-identity-9z8mk" (UID: "f87662b9-6ac6-44f3-8a16-ff858c2baa91") : secret "network-node-identity-cert" not found Mar 13 10:34:52.496687 master-0 kubenswrapper[4091]: I0313 10:34:52.496653 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-env-overrides\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.498770 master-0 kubenswrapper[4091]: I0313 10:34:52.497757 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-ovnkube-identity-cm\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.524037 master-0 kubenswrapper[4091]: I0313 10:34:52.523983 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk4sg\" (UniqueName: \"kubernetes.io/projected/f87662b9-6ac6-44f3-8a16-ff858c2baa91-kube-api-access-zk4sg\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.997337 master-0 kubenswrapper[4091]: I0313 10:34:52.997270 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " 
pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:52.997745 master-0 kubenswrapper[4091]: E0313 10:34:52.997480 4091 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Mar 13 10:34:52.997745 master-0 kubenswrapper[4091]: E0313 10:34:52.997553 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert podName:f87662b9-6ac6-44f3-8a16-ff858c2baa91 nodeName:}" failed. No retries permitted until 2026-03-13 10:34:53.99753322 +0000 UTC m=+132.686255692 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert") pod "network-node-identity-9z8mk" (UID: "f87662b9-6ac6-44f3-8a16-ff858c2baa91") : secret "network-node-identity-cert" not found Mar 13 10:34:53.098966 master-0 kubenswrapper[4091]: I0313 10:34:53.098896 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:34:53.099279 master-0 kubenswrapper[4091]: E0313 10:34:53.099219 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:34:53.099279 master-0 kubenswrapper[4091]: E0313 10:34:53.099269 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:34:53.099279 master-0 kubenswrapper[4091]: E0313 10:34:53.099283 4091 projected.go:194] Error preparing data for 
projected volume kube-api-access-gchrx for pod openshift-network-diagnostics/network-check-target-96vwf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:34:53.099411 master-0 kubenswrapper[4091]: E0313 10:34:53.099344 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx podName:803de28e-3b31-4ea2-9b97-87a733635a5c nodeName:}" failed. No retries permitted until 2026-03-13 10:34:55.099326755 +0000 UTC m=+133.788049217 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gchrx" (UniqueName: "kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx") pod "network-check-target-96vwf" (UID: "803de28e-3b31-4ea2-9b97-87a733635a5c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:34:53.202191 master-0 kubenswrapper[4091]: I0313 10:34:53.202116 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:34:53.202455 master-0 kubenswrapper[4091]: E0313 10:34:53.202312 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:34:54.009620 master-0 kubenswrapper[4091]: I0313 10:34:54.008849 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:54.012168 master-0 kubenswrapper[4091]: I0313 10:34:54.012111 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:54.142651 master-0 kubenswrapper[4091]: I0313 10:34:54.142393 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:34:54.202947 master-0 kubenswrapper[4091]: I0313 10:34:54.202886 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:54.203307 master-0 kubenswrapper[4091]: E0313 10:34:54.203071 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:55.034552 master-0 kubenswrapper[4091]: I0313 10:34:55.034430 4091 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="c52caffe2e52c9e9297b6c1f2ec3f7f6e6e6506eb77ca1a1569946e8d355217d" exitCode=0 Mar 13 10:34:55.036406 master-0 kubenswrapper[4091]: I0313 10:34:55.034524 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" event={"ID":"5843b0d4-a538-4261-b425-598e318c9d07","Type":"ContainerDied","Data":"c52caffe2e52c9e9297b6c1f2ec3f7f6e6e6506eb77ca1a1569946e8d355217d"} Mar 13 10:34:55.038399 master-0 kubenswrapper[4091]: I0313 10:34:55.038308 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-9z8mk" event={"ID":"f87662b9-6ac6-44f3-8a16-ff858c2baa91","Type":"ContainerStarted","Data":"d6da84d85f2972436dd4f3787492391fdf9d5e5e6bdc8d3e5f13761666cdfd3b"} Mar 13 10:34:55.114749 master-0 kubenswrapper[4091]: I0313 10:34:55.114661 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:34:55.115055 master-0 kubenswrapper[4091]: E0313 10:34:55.114904 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:34:55.115055 master-0 kubenswrapper[4091]: E0313 10:34:55.114930 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 
10:34:55.115055 master-0 kubenswrapper[4091]: E0313 10:34:55.114944 4091 projected.go:194] Error preparing data for projected volume kube-api-access-gchrx for pod openshift-network-diagnostics/network-check-target-96vwf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:34:55.115055 master-0 kubenswrapper[4091]: E0313 10:34:55.115028 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx podName:803de28e-3b31-4ea2-9b97-87a733635a5c nodeName:}" failed. No retries permitted until 2026-03-13 10:34:59.11500645 +0000 UTC m=+137.803728922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gchrx" (UniqueName: "kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx") pod "network-check-target-96vwf" (UID: "803de28e-3b31-4ea2-9b97-87a733635a5c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:34:55.202485 master-0 kubenswrapper[4091]: I0313 10:34:55.202383 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:34:55.202976 master-0 kubenswrapper[4091]: E0313 10:34:55.202931 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:34:56.202784 master-0 kubenswrapper[4091]: I0313 10:34:56.202706 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:56.203410 master-0 kubenswrapper[4091]: E0313 10:34:56.202878 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:57.202357 master-0 kubenswrapper[4091]: I0313 10:34:57.202297 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:34:57.202601 master-0 kubenswrapper[4091]: E0313 10:34:57.202446 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:34:57.387381 master-0 kubenswrapper[4091]: E0313 10:34:57.387304 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 10:34:58.051844 master-0 kubenswrapper[4091]: I0313 10:34:58.051782 4091 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="5aea8eda95c6cad12da786a1a1cc2a69af0868d380d904ea93a9398f7754ee5b" exitCode=0 Mar 13 10:34:58.051844 master-0 kubenswrapper[4091]: I0313 10:34:58.051842 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" event={"ID":"5843b0d4-a538-4261-b425-598e318c9d07","Type":"ContainerDied","Data":"5aea8eda95c6cad12da786a1a1cc2a69af0868d380d904ea93a9398f7754ee5b"} Mar 13 10:34:58.203020 master-0 kubenswrapper[4091]: I0313 10:34:58.202940 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:34:58.203315 master-0 kubenswrapper[4091]: E0313 10:34:58.203131 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:34:59.148301 master-0 kubenswrapper[4091]: I0313 10:34:59.148207 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:34:59.149395 master-0 kubenswrapper[4091]: E0313 10:34:59.148407 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:34:59.149395 master-0 kubenswrapper[4091]: E0313 10:34:59.148426 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:34:59.149395 master-0 kubenswrapper[4091]: E0313 10:34:59.148438 4091 projected.go:194] Error preparing data for projected volume kube-api-access-gchrx for pod openshift-network-diagnostics/network-check-target-96vwf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:34:59.149395 master-0 kubenswrapper[4091]: E0313 10:34:59.148498 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx podName:803de28e-3b31-4ea2-9b97-87a733635a5c nodeName:}" failed. No retries permitted until 2026-03-13 10:35:07.14848023 +0000 UTC m=+145.837202682 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gchrx" (UniqueName: "kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx") pod "network-check-target-96vwf" (UID: "803de28e-3b31-4ea2-9b97-87a733635a5c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:34:59.202552 master-0 kubenswrapper[4091]: I0313 10:34:59.202461 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:34:59.202874 master-0 kubenswrapper[4091]: E0313 10:34:59.202660 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:00.202856 master-0 kubenswrapper[4091]: I0313 10:35:00.202418 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:00.204398 master-0 kubenswrapper[4091]: E0313 10:35:00.202995 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:01.202154 master-0 kubenswrapper[4091]: I0313 10:35:01.202107 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:01.202322 master-0 kubenswrapper[4091]: E0313 10:35:01.202292 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:02.205667 master-0 kubenswrapper[4091]: I0313 10:35:02.205620 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:02.206181 master-0 kubenswrapper[4091]: E0313 10:35:02.205717 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:02.374901 master-0 kubenswrapper[4091]: I0313 10:35:02.374816 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:02.375301 master-0 kubenswrapper[4091]: E0313 10:35:02.375044 4091 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:35:02.375301 master-0 kubenswrapper[4091]: E0313 10:35:02.375240 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:34.37520739 +0000 UTC m=+173.063929852 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 10:35:02.388673 master-0 kubenswrapper[4091]: E0313 10:35:02.388566 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:35:03.201903 master-0 kubenswrapper[4091]: I0313 10:35:03.201836 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:03.202181 master-0 kubenswrapper[4091]: E0313 10:35:03.202015 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:04.202052 master-0 kubenswrapper[4091]: I0313 10:35:04.201973 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:04.202720 master-0 kubenswrapper[4091]: E0313 10:35:04.202156 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:05.202816 master-0 kubenswrapper[4091]: I0313 10:35:05.202713 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:05.203547 master-0 kubenswrapper[4091]: E0313 10:35:05.202882 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:06.202141 master-0 kubenswrapper[4091]: I0313 10:35:06.202056 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:06.202377 master-0 kubenswrapper[4091]: E0313 10:35:06.202257 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:07.202654 master-0 kubenswrapper[4091]: I0313 10:35:07.202555 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:07.203370 master-0 kubenswrapper[4091]: E0313 10:35:07.202758 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:07.217194 master-0 kubenswrapper[4091]: I0313 10:35:07.217115 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:07.217463 master-0 kubenswrapper[4091]: E0313 10:35:07.217395 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:35:07.217463 master-0 kubenswrapper[4091]: E0313 10:35:07.217462 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:35:07.217555 master-0 kubenswrapper[4091]: E0313 10:35:07.217480 4091 projected.go:194] Error preparing data for projected volume kube-api-access-gchrx for pod openshift-network-diagnostics/network-check-target-96vwf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:35:07.217630 master-0 kubenswrapper[4091]: E0313 10:35:07.217561 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx podName:803de28e-3b31-4ea2-9b97-87a733635a5c nodeName:}" failed. No retries permitted until 2026-03-13 10:35:23.217536365 +0000 UTC m=+161.906259007 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gchrx" (UniqueName: "kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx") pod "network-check-target-96vwf" (UID: "803de28e-3b31-4ea2-9b97-87a733635a5c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:35:07.390627 master-0 kubenswrapper[4091]: E0313 10:35:07.390530 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:35:08.202876 master-0 kubenswrapper[4091]: I0313 10:35:08.202799 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:08.203505 master-0 kubenswrapper[4091]: E0313 10:35:08.202996 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:09.202023 master-0 kubenswrapper[4091]: I0313 10:35:09.201937 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:09.202340 master-0 kubenswrapper[4091]: E0313 10:35:09.202126 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:10.202022 master-0 kubenswrapper[4091]: I0313 10:35:10.201928 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:10.202648 master-0 kubenswrapper[4091]: E0313 10:35:10.202134 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:11.202571 master-0 kubenswrapper[4091]: I0313 10:35:11.202491 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:11.203082 master-0 kubenswrapper[4091]: E0313 10:35:11.202679 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:12.204439 master-0 kubenswrapper[4091]: I0313 10:35:12.202742 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:12.204439 master-0 kubenswrapper[4091]: E0313 10:35:12.203538 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:12.392622 master-0 kubenswrapper[4091]: E0313 10:35:12.392487 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:35:13.202945 master-0 kubenswrapper[4091]: I0313 10:35:13.202849 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:13.203270 master-0 kubenswrapper[4091]: E0313 10:35:13.203052 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:14.202211 master-0 kubenswrapper[4091]: I0313 10:35:14.202107 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:14.202905 master-0 kubenswrapper[4091]: E0313 10:35:14.202292 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:14.316324 master-0 kubenswrapper[4091]: I0313 10:35:14.316259 4091 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t6fzx"] Mar 13 10:35:15.202716 master-0 kubenswrapper[4091]: I0313 10:35:15.202345 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:15.203390 master-0 kubenswrapper[4091]: E0313 10:35:15.202818 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:16.202138 master-0 kubenswrapper[4091]: I0313 10:35:16.202066 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:16.202625 master-0 kubenswrapper[4091]: E0313 10:35:16.202567 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:17.108819 master-0 kubenswrapper[4091]: I0313 10:35:17.108621 4091 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="a71a5f7050d9b50b349f60da266053c0daef17268d0a768624b3f4f70f7f01a0" exitCode=0 Mar 13 10:35:17.108819 master-0 kubenswrapper[4091]: I0313 10:35:17.108728 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" event={"ID":"5843b0d4-a538-4261-b425-598e318c9d07","Type":"ContainerDied","Data":"a71a5f7050d9b50b349f60da266053c0daef17268d0a768624b3f4f70f7f01a0"} Mar 13 10:35:17.111639 master-0 kubenswrapper[4091]: I0313 10:35:17.111379 4091 generic.go:334] "Generic (PLEG): container finished" podID="95ce747e-1691-4861-aaa2-892ce2eab47b" containerID="00f5c083a821fbca38cbfd341441afb6be196b84b47649d2ebbc7edabd6bf075" exitCode=0 Mar 13 10:35:17.111639 master-0 kubenswrapper[4091]: I0313 10:35:17.111490 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx" event={"ID":"95ce747e-1691-4861-aaa2-892ce2eab47b","Type":"ContainerDied","Data":"00f5c083a821fbca38cbfd341441afb6be196b84b47649d2ebbc7edabd6bf075"} Mar 13 10:35:17.115754 master-0 kubenswrapper[4091]: I0313 10:35:17.113507 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-9z8mk" event={"ID":"f87662b9-6ac6-44f3-8a16-ff858c2baa91","Type":"ContainerStarted","Data":"d2e7a9c17281b6d5f7f20fbe7b128af98dc009aec3115a4cb2ebd1a39090d634"} Mar 13 10:35:17.115754 master-0 kubenswrapper[4091]: I0313 10:35:17.113571 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-9z8mk" 
event={"ID":"f87662b9-6ac6-44f3-8a16-ff858c2baa91","Type":"ContainerStarted","Data":"91b11f9c708aeb26fa6add1f4edafa995c757f11a6ec8c0817b67e5809c9a88e"} Mar 13 10:35:17.115754 master-0 kubenswrapper[4091]: I0313 10:35:17.115333 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" event={"ID":"1c12a5d5-711f-4663-974c-c4b06e15fc39","Type":"ContainerStarted","Data":"3711f960c560ecb4568aab641312d36db294714abc5c774ce0693e59fb2ba6d8"} Mar 13 10:35:17.125003 master-0 kubenswrapper[4091]: I0313 10:35:17.124948 4091 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx" Mar 13 10:35:17.150299 master-0 kubenswrapper[4091]: I0313 10:35:17.150119 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" podStartSLOduration=9.901745905 podStartE2EDuration="36.150094101s" podCreationTimestamp="2026-03-13 10:34:41 +0000 UTC" firstStartedPulling="2026-03-13 10:34:50.450267909 +0000 UTC m=+129.138990391" lastFinishedPulling="2026-03-13 10:35:16.698616125 +0000 UTC m=+155.387338587" observedRunningTime="2026-03-13 10:35:17.149059153 +0000 UTC m=+155.837781635" watchObservedRunningTime="2026-03-13 10:35:17.150094101 +0000 UTC m=+155.838816563" Mar 13 10:35:17.195354 master-0 kubenswrapper[4091]: I0313 10:35:17.195247 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-9z8mk" podStartSLOduration=2.735139779 podStartE2EDuration="25.195224202s" podCreationTimestamp="2026-03-13 10:34:52 +0000 UTC" firstStartedPulling="2026-03-13 10:34:54.160126848 +0000 UTC m=+132.848849310" lastFinishedPulling="2026-03-13 10:35:16.620211271 +0000 UTC m=+155.308933733" observedRunningTime="2026-03-13 10:35:17.194555934 +0000 UTC m=+155.883278436" watchObservedRunningTime="2026-03-13 10:35:17.195224202 
+0000 UTC m=+155.883946674" Mar 13 10:35:17.201975 master-0 kubenswrapper[4091]: I0313 10:35:17.201937 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:17.202156 master-0 kubenswrapper[4091]: E0313 10:35:17.202056 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:17.204555 master-0 kubenswrapper[4091]: I0313 10:35:17.204496 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204555 master-0 kubenswrapper[4091]: I0313 10:35:17.204548 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-netd\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204703 master-0 kubenswrapper[4091]: I0313 10:35:17.204566 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-bin\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204703 master-0 kubenswrapper[4091]: I0313 10:35:17.204617 4091 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-slash\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204703 master-0 kubenswrapper[4091]: I0313 10:35:17.204649 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwsg5\" (UniqueName: \"kubernetes.io/projected/95ce747e-1691-4861-aaa2-892ce2eab47b-kube-api-access-kwsg5\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204703 master-0 kubenswrapper[4091]: I0313 10:35:17.204698 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-systemd\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204866 master-0 kubenswrapper[4091]: I0313 10:35:17.204715 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-kubelet\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204866 master-0 kubenswrapper[4091]: I0313 10:35:17.204732 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-openvswitch\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204866 master-0 kubenswrapper[4091]: I0313 10:35:17.204748 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-log-socket\") pod 
\"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204866 master-0 kubenswrapper[4091]: I0313 10:35:17.204768 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-config\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204866 master-0 kubenswrapper[4091]: I0313 10:35:17.204787 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-node-log\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204866 master-0 kubenswrapper[4091]: I0313 10:35:17.204805 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95ce747e-1691-4861-aaa2-892ce2eab47b-ovn-node-metrics-cert\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204866 master-0 kubenswrapper[4091]: I0313 10:35:17.204835 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-ovn-kubernetes\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.204866 master-0 kubenswrapper[4091]: I0313 10:35:17.204860 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-systemd-units\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.205145 master-0 kubenswrapper[4091]: I0313 
10:35:17.204885 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-ovn\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.205145 master-0 kubenswrapper[4091]: I0313 10:35:17.204914 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-etc-openvswitch\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.205145 master-0 kubenswrapper[4091]: I0313 10:35:17.204935 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-var-lib-openvswitch\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.205145 master-0 kubenswrapper[4091]: I0313 10:35:17.204958 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-netns\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.205145 master-0 kubenswrapper[4091]: I0313 10:35:17.204979 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-script-lib\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.205145 master-0 kubenswrapper[4091]: I0313 10:35:17.204999 4091 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-env-overrides\") pod \"95ce747e-1691-4861-aaa2-892ce2eab47b\" (UID: \"95ce747e-1691-4861-aaa2-892ce2eab47b\") " Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205776 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205811 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205841 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205865 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-node-log" (OuterVolumeSpecName: "node-log") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205856 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205890 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205920 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-slash" (OuterVolumeSpecName: "host-slash") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205936 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205967 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205976 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205986 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.206014 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.206014 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-log-socket" (OuterVolumeSpecName: "log-socket") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.205992 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206078 master-0 kubenswrapper[4091]: I0313 10:35:17.206041 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:35:17.206735 master-0 kubenswrapper[4091]: I0313 10:35:17.206253 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:35:17.206735 master-0 kubenswrapper[4091]: I0313 10:35:17.206670 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:35:17.206816 master-0 kubenswrapper[4091]: I0313 10:35:17.206783 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.210936 4091 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.210977 4091 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.210995 4091 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211009 4091 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211023 4091 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211037 4091 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211049 4091 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211061 4091 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211073 4091 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211087 4091 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211100 4091 reconciler_common.go:293] 
"Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211112 4091 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95ce747e-1691-4861-aaa2-892ce2eab47b-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211125 4091 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-node-log\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211136 4091 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211149 4091 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211159 4091 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211169 4091 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211253 master-0 kubenswrapper[4091]: I0313 10:35:17.211179 4091 reconciler_common.go:293] "Volume detached for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95ce747e-1691-4861-aaa2-892ce2eab47b-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.211902 master-0 kubenswrapper[4091]: I0313 10:35:17.211493 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95ce747e-1691-4861-aaa2-892ce2eab47b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:35:17.212358 master-0 kubenswrapper[4091]: I0313 10:35:17.212314 4091 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ce747e-1691-4861-aaa2-892ce2eab47b-kube-api-access-kwsg5" (OuterVolumeSpecName: "kube-api-access-kwsg5") pod "95ce747e-1691-4861-aaa2-892ce2eab47b" (UID: "95ce747e-1691-4861-aaa2-892ce2eab47b"). InnerVolumeSpecName "kube-api-access-kwsg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:35:17.312007 master-0 kubenswrapper[4091]: I0313 10:35:17.311760 4091 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwsg5\" (UniqueName: \"kubernetes.io/projected/95ce747e-1691-4861-aaa2-892ce2eab47b-kube-api-access-kwsg5\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.312007 master-0 kubenswrapper[4091]: I0313 10:35:17.311805 4091 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95ce747e-1691-4861-aaa2-892ce2eab47b-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:17.394054 master-0 kubenswrapper[4091]: E0313 10:35:17.393846 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 13 10:35:18.121756 master-0 kubenswrapper[4091]: I0313 10:35:18.121668 4091 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="b9afa0d6c9ded08257918288601275e200a1f5d816485290920a81d0a9149405" exitCode=0 Mar 13 10:35:18.122391 master-0 kubenswrapper[4091]: I0313 10:35:18.121735 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" event={"ID":"5843b0d4-a538-4261-b425-598e318c9d07","Type":"ContainerDied","Data":"b9afa0d6c9ded08257918288601275e200a1f5d816485290920a81d0a9149405"} Mar 13 10:35:18.124071 master-0 kubenswrapper[4091]: I0313 10:35:18.123994 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx" event={"ID":"95ce747e-1691-4861-aaa2-892ce2eab47b","Type":"ContainerDied","Data":"f970075166524785d0812e7d696731dfa941b8a14592ce15d361cb8a9fc71f47"} Mar 13 10:35:18.124151 master-0 kubenswrapper[4091]: I0313 10:35:18.124075 4091 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t6fzx" Mar 13 10:35:18.124151 master-0 kubenswrapper[4091]: I0313 10:35:18.124099 4091 scope.go:117] "RemoveContainer" containerID="00f5c083a821fbca38cbfd341441afb6be196b84b47649d2ebbc7edabd6bf075" Mar 13 10:35:18.203001 master-0 kubenswrapper[4091]: I0313 10:35:18.202925 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:18.203268 master-0 kubenswrapper[4091]: E0313 10:35:18.203103 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:18.416266 master-0 kubenswrapper[4091]: I0313 10:35:18.416072 4091 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t6fzx"] Mar 13 10:35:18.499907 master-0 kubenswrapper[4091]: I0313 10:35:18.499845 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hztqp"] Mar 13 10:35:18.500203 master-0 kubenswrapper[4091]: E0313 10:35:18.499987 4091 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ce747e-1691-4861-aaa2-892ce2eab47b" containerName="kubecfg-setup" Mar 13 10:35:18.500203 master-0 kubenswrapper[4091]: I0313 10:35:18.500005 4091 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ce747e-1691-4861-aaa2-892ce2eab47b" containerName="kubecfg-setup" Mar 13 10:35:18.500203 master-0 kubenswrapper[4091]: I0313 10:35:18.500048 4091 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ce747e-1691-4861-aaa2-892ce2eab47b" containerName="kubecfg-setup" Mar 13 10:35:18.500865 master-0 kubenswrapper[4091]: I0313 10:35:18.500823 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.502997 master-0 kubenswrapper[4091]: I0313 10:35:18.502916 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 10:35:18.503291 master-0 kubenswrapper[4091]: I0313 10:35:18.503248 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 10:35:18.521190 master-0 kubenswrapper[4091]: I0313 10:35:18.521126 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521190 master-0 kubenswrapper[4091]: I0313 10:35:18.521177 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-env-overrides\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521190 master-0 kubenswrapper[4091]: I0313 10:35:18.521203 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-log-socket\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521486 master-0 kubenswrapper[4091]: I0313 10:35:18.521259 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-slash\") pod 
\"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521486 master-0 kubenswrapper[4091]: I0313 10:35:18.521277 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-systemd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521486 master-0 kubenswrapper[4091]: I0313 10:35:18.521312 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-systemd-units\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521486 master-0 kubenswrapper[4091]: I0313 10:35:18.521332 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxvqn\" (UniqueName: \"kubernetes.io/projected/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-kube-api-access-vxvqn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521486 master-0 kubenswrapper[4091]: I0313 10:35:18.521356 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-node-log\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521486 master-0 kubenswrapper[4091]: I0313 10:35:18.521383 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-bin\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521486 master-0 kubenswrapper[4091]: I0313 10:35:18.521411 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521486 master-0 kubenswrapper[4091]: I0313 10:35:18.521456 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovn-node-metrics-cert\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521486 master-0 kubenswrapper[4091]: I0313 10:35:18.521493 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521839 master-0 kubenswrapper[4091]: I0313 10:35:18.521513 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-netd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521839 master-0 kubenswrapper[4091]: I0313 
10:35:18.521532 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-config\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521839 master-0 kubenswrapper[4091]: I0313 10:35:18.521553 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-kubelet\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521839 master-0 kubenswrapper[4091]: I0313 10:35:18.521573 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-script-lib\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521839 master-0 kubenswrapper[4091]: I0313 10:35:18.521726 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-var-lib-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521839 master-0 kubenswrapper[4091]: I0313 10:35:18.521777 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-etc-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.521839 master-0 kubenswrapper[4091]: I0313 10:35:18.521810 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-netns\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.522190 master-0 kubenswrapper[4091]: I0313 10:35:18.521871 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-ovn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.539276 master-0 kubenswrapper[4091]: I0313 10:35:18.539187 4091 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t6fzx"] Mar 13 10:35:18.622982 master-0 kubenswrapper[4091]: I0313 10:35:18.622887 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-node-log\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.622982 master-0 kubenswrapper[4091]: I0313 10:35:18.622960 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-bin\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.622982 master-0 kubenswrapper[4091]: I0313 10:35:18.622983 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623008 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623051 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-netd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623067 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-config\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623082 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovn-node-metrics-cert\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623099 4091 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-kubelet\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623116 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-script-lib\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623146 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-var-lib-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623163 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-etc-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623180 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-netns\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623280 4091 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623375 master-0 kubenswrapper[4091]: I0313 10:35:18.623310 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-netd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623681 master-0 kubenswrapper[4091]: I0313 10:35:18.623390 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-ovn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623681 master-0 kubenswrapper[4091]: I0313 10:35:18.623446 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-ovn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623736 master-0 kubenswrapper[4091]: I0313 10:35:18.623708 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-kubelet\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623874 master-0 kubenswrapper[4091]: I0313 10:35:18.623825 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-var-lib-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623938 master-0 kubenswrapper[4091]: I0313 10:35:18.623880 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-etc-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.623938 master-0 kubenswrapper[4091]: I0313 10:35:18.623910 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-bin\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624018 master-0 kubenswrapper[4091]: I0313 10:35:18.623938 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-node-log\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624018 master-0 kubenswrapper[4091]: I0313 10:35:18.623967 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624018 master-0 kubenswrapper[4091]: I0313 10:35:18.624004 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624126 master-0 kubenswrapper[4091]: I0313 10:35:18.624024 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-env-overrides\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624126 master-0 kubenswrapper[4091]: I0313 10:35:18.624046 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-log-socket\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624126 master-0 kubenswrapper[4091]: I0313 10:35:18.624096 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624126 master-0 kubenswrapper[4091]: I0313 10:35:18.624106 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-netns\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624280 master-0 kubenswrapper[4091]: I0313 10:35:18.624138 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-log-socket\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624280 master-0 kubenswrapper[4091]: I0313 10:35:18.624206 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-systemd-units\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624280 master-0 kubenswrapper[4091]: I0313 10:35:18.624234 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-slash\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624280 master-0 kubenswrapper[4091]: I0313 10:35:18.624276 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-slash\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624502 master-0 kubenswrapper[4091]: I0313 10:35:18.624312 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-systemd-units\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624502 master-0 kubenswrapper[4091]: I0313 10:35:18.624341 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-systemd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624502 master-0 kubenswrapper[4091]: I0313 10:35:18.624370 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxvqn\" (UniqueName: \"kubernetes.io/projected/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-kube-api-access-vxvqn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624502 master-0 kubenswrapper[4091]: I0313 10:35:18.624379 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-config\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624502 master-0 kubenswrapper[4091]: I0313 10:35:18.624413 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-systemd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624720 master-0 kubenswrapper[4091]: I0313 10:35:18.624582 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-script-lib\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.624720 master-0 kubenswrapper[4091]: I0313 10:35:18.624698 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-env-overrides\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.629233 master-0 kubenswrapper[4091]: I0313 10:35:18.629189 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovn-node-metrics-cert\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.643164 master-0 kubenswrapper[4091]: I0313 10:35:18.643086 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxvqn\" (UniqueName: \"kubernetes.io/projected/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-kube-api-access-vxvqn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:18.812612 master-0 kubenswrapper[4091]: I0313 10:35:18.812507 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:19.132053 master-0 kubenswrapper[4091]: I0313 10:35:19.131676 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" event={"ID":"5843b0d4-a538-4261-b425-598e318c9d07","Type":"ContainerStarted","Data":"c944cbb5ce091d2b30a6ba99bdc8978babb9cad154349fab3c6d17e1a725697a"} Mar 13 10:35:19.133771 master-0 kubenswrapper[4091]: I0313 10:35:19.133719 4091 generic.go:334] "Generic (PLEG): container finished" podID="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" containerID="5a756cbc772c72bcdf3f7b55e67e0c66e077c8bc9496058fd8ad31da12ffe6d7" exitCode=0 Mar 13 10:35:19.134027 master-0 kubenswrapper[4091]: I0313 10:35:19.133805 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerDied","Data":"5a756cbc772c72bcdf3f7b55e67e0c66e077c8bc9496058fd8ad31da12ffe6d7"} Mar 13 10:35:19.134027 master-0 kubenswrapper[4091]: I0313 10:35:19.133884 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerStarted","Data":"1e94f5c752f1fded64a3ee340fb34a998ddce5e3acb0a9a9e83f157fbccc7394"} Mar 13 10:35:19.183985 master-0 kubenswrapper[4091]: I0313 10:35:19.183847 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mc5nc" podStartSLOduration=3.657322875 podStartE2EDuration="50.183813566s" podCreationTimestamp="2026-03-13 10:34:29 +0000 UTC" firstStartedPulling="2026-03-13 10:34:30.023148837 +0000 UTC m=+108.711871299" lastFinishedPulling="2026-03-13 10:35:16.549639528 +0000 UTC m=+155.238361990" observedRunningTime="2026-03-13 10:35:19.150601365 +0000 UTC m=+157.839323847" watchObservedRunningTime="2026-03-13 10:35:19.183813566 +0000 UTC m=+157.872536028" Mar 
13 10:35:19.201977 master-0 kubenswrapper[4091]: I0313 10:35:19.201932 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:19.202263 master-0 kubenswrapper[4091]: E0313 10:35:19.202233 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:20.141171 master-0 kubenswrapper[4091]: I0313 10:35:20.141040 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerStarted","Data":"58682cc336ca0aa7bbb629aec26944fb5d0a6c9ea3dff226a7de0aa1fe2fb640"} Mar 13 10:35:20.141171 master-0 kubenswrapper[4091]: I0313 10:35:20.141150 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerStarted","Data":"f9ec1d262ea05c3cd24695fe3a38cd47ac61c6631c3e8af0503c17d0dd7fba65"} Mar 13 10:35:20.141171 master-0 kubenswrapper[4091]: I0313 10:35:20.141175 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerStarted","Data":"19ec38b6bf648e1b9f502c40b6a961773e53af1e82800c0ab7ccb94ffa661fa7"} Mar 13 10:35:20.141171 master-0 kubenswrapper[4091]: I0313 10:35:20.141197 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" 
event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerStarted","Data":"d0b93f603b86e93f674e0cab0ab5adf1e0c40e840fe40f79c0802b450b3221f6"} Mar 13 10:35:20.142824 master-0 kubenswrapper[4091]: I0313 10:35:20.141218 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerStarted","Data":"ac00412d783845ece2470264e917371e0766975870339dfc7d2856aa015040ca"} Mar 13 10:35:20.142824 master-0 kubenswrapper[4091]: I0313 10:35:20.141245 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerStarted","Data":"539402afbdd28521eeba2e36bb2bfa5fbeb35b4c6dd6b2e02507511a26027398"} Mar 13 10:35:20.203089 master-0 kubenswrapper[4091]: I0313 10:35:20.202945 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:20.203422 master-0 kubenswrapper[4091]: E0313 10:35:20.203152 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:20.208404 master-0 kubenswrapper[4091]: I0313 10:35:20.208318 4091 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ce747e-1691-4861-aaa2-892ce2eab47b" path="/var/lib/kubelet/pods/95ce747e-1691-4861-aaa2-892ce2eab47b/volumes" Mar 13 10:35:21.202321 master-0 kubenswrapper[4091]: I0313 10:35:21.202161 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:21.203091 master-0 kubenswrapper[4091]: E0313 10:35:21.202344 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:22.152874 master-0 kubenswrapper[4091]: I0313 10:35:22.152773 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerStarted","Data":"9163d43ec4c32fb17f173244c4db32aef16f6496e2c82e4e9360a840fad2cd8c"} Mar 13 10:35:22.202619 master-0 kubenswrapper[4091]: I0313 10:35:22.202497 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:22.203367 master-0 kubenswrapper[4091]: E0313 10:35:22.203307 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:22.395630 master-0 kubenswrapper[4091]: E0313 10:35:22.395466 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 13 10:35:23.202930 master-0 kubenswrapper[4091]: I0313 10:35:23.202638 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:23.204736 master-0 kubenswrapper[4091]: E0313 10:35:23.203011 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:23.268661 master-0 kubenswrapper[4091]: I0313 10:35:23.268579 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:23.268956 master-0 kubenswrapper[4091]: E0313 10:35:23.268840 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 10:35:23.268956 master-0 kubenswrapper[4091]: E0313 10:35:23.268901 4091 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 10:35:23.268956 master-0 kubenswrapper[4091]: E0313 10:35:23.268918 4091 projected.go:194] Error preparing data for projected volume kube-api-access-gchrx for pod openshift-network-diagnostics/network-check-target-96vwf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered] Mar 13 10:35:23.269083 master-0 kubenswrapper[4091]: E0313 10:35:23.268988 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx podName:803de28e-3b31-4ea2-9b97-87a733635a5c nodeName:}" failed. No retries permitted until 2026-03-13 10:35:55.268968814 +0000 UTC m=+193.957691276 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gchrx" (UniqueName: "kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx") pod "network-check-target-96vwf" (UID: "803de28e-3b31-4ea2-9b97-87a733635a5c") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 10:35:24.166055 master-0 kubenswrapper[4091]: I0313 10:35:24.165835 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" event={"ID":"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3","Type":"ContainerStarted","Data":"7a113769ba9281ee5f08eb93108eff905638417d6069d4dd6b2e73bfffdc7f07"} Mar 13 10:35:24.166628 master-0 kubenswrapper[4091]: I0313 10:35:24.166209 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:24.186329 master-0 kubenswrapper[4091]: I0313 10:35:24.186283 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:24.202791 master-0 kubenswrapper[4091]: I0313 10:35:24.202752 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:24.203135 master-0 kubenswrapper[4091]: E0313 10:35:24.202890 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:24.330484 master-0 kubenswrapper[4091]: I0313 10:35:24.330380 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" podStartSLOduration=6.330355847 podStartE2EDuration="6.330355847s" podCreationTimestamp="2026-03-13 10:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:35:24.329951476 +0000 UTC m=+163.018673958" watchObservedRunningTime="2026-03-13 10:35:24.330355847 +0000 UTC m=+163.019078309" Mar 13 10:35:25.172245 master-0 kubenswrapper[4091]: I0313 10:35:25.172163 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:25.172245 master-0 kubenswrapper[4091]: I0313 10:35:25.172239 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:25.201438 master-0 kubenswrapper[4091]: I0313 10:35:25.201365 4091 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:25.201956 master-0 kubenswrapper[4091]: I0313 10:35:25.201901 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:25.202173 master-0 kubenswrapper[4091]: E0313 10:35:25.202103 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:26.202950 master-0 kubenswrapper[4091]: I0313 10:35:26.202440 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:26.203606 master-0 kubenswrapper[4091]: E0313 10:35:26.203000 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:26.277210 master-0 kubenswrapper[4091]: I0313 10:35:26.277144 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-96vwf"] Mar 13 10:35:26.277457 master-0 kubenswrapper[4091]: I0313 10:35:26.277307 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:26.277457 master-0 kubenswrapper[4091]: E0313 10:35:26.277412 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:26.279795 master-0 kubenswrapper[4091]: I0313 10:35:26.279722 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jz2lp"] Mar 13 10:35:27.182568 master-0 kubenswrapper[4091]: I0313 10:35:27.182497 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:27.182931 master-0 kubenswrapper[4091]: E0313 10:35:27.182662 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:27.396776 master-0 kubenswrapper[4091]: E0313 10:35:27.396580 4091 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 13 10:35:28.203176 master-0 kubenswrapper[4091]: I0313 10:35:28.203019 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:28.203393 master-0 kubenswrapper[4091]: E0313 10:35:28.203195 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:28.203485 master-0 kubenswrapper[4091]: I0313 10:35:28.203448 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:28.203706 master-0 kubenswrapper[4091]: E0313 10:35:28.203656 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:28.985745 master-0 kubenswrapper[4091]: E0313 10:35:28.985635 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" podUID="4aaf36b4-e556-4723-a624-aa2edc69c301" Mar 13 10:35:30.202262 master-0 kubenswrapper[4091]: I0313 10:35:30.202206 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:30.203127 master-0 kubenswrapper[4091]: I0313 10:35:30.202332 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:30.203127 master-0 kubenswrapper[4091]: E0313 10:35:30.202448 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:30.203127 master-0 kubenswrapper[4091]: E0313 10:35:30.202521 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:32.201937 master-0 kubenswrapper[4091]: I0313 10:35:32.201847 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:32.201937 master-0 kubenswrapper[4091]: I0313 10:35:32.201914 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:32.202923 master-0 kubenswrapper[4091]: E0313 10:35:32.202751 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:35:32.202923 master-0 kubenswrapper[4091]: E0313 10:35:32.202871 4091 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-96vwf" podUID="803de28e-3b31-4ea2-9b97-87a733635a5c" Mar 13 10:35:33.968888 master-0 kubenswrapper[4091]: I0313 10:35:33.968757 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:33.969532 master-0 kubenswrapper[4091]: E0313 10:35:33.969003 4091 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 10:35:33.969532 master-0 kubenswrapper[4091]: E0313 10:35:33.969145 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:37:35.969110426 +0000 UTC m=+294.657832928 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found Mar 13 10:35:34.202292 master-0 kubenswrapper[4091]: I0313 10:35:34.202165 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:34.202714 master-0 kubenswrapper[4091]: I0313 10:35:34.202551 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:34.205772 master-0 kubenswrapper[4091]: I0313 10:35:34.204931 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 10:35:34.205772 master-0 kubenswrapper[4091]: I0313 10:35:34.205050 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 10:35:34.208003 master-0 kubenswrapper[4091]: I0313 10:35:34.207957 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 10:35:34.474039 master-0 kubenswrapper[4091]: I0313 10:35:34.473950 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:34.474287 master-0 kubenswrapper[4091]: E0313 10:35:34.474187 4091 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 10:35:34.474287 master-0 kubenswrapper[4091]: E0313 10:35:34.474279 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:36:38.474259092 +0000 UTC m=+237.162981574 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : secret "metrics-daemon-secret" not found Mar 13 10:35:36.567216 master-0 kubenswrapper[4091]: I0313 10:35:36.567134 4091 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 13 10:35:36.602442 master-0 kubenswrapper[4091]: I0313 10:35:36.602321 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"] Mar 13 10:35:36.603200 master-0 kubenswrapper[4091]: I0313 10:35:36.603130 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:35:36.613128 master-0 kubenswrapper[4091]: W0313 10:35:36.611053 4091 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'master-0' and this object Mar 13 10:35:36.613128 master-0 kubenswrapper[4091]: E0313 10:35:36.611124 4091 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 13 10:35:36.613128 master-0 kubenswrapper[4091]: W0313 10:35:36.611183 4091 reflector.go:561] 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'master-0' and this object Mar 13 10:35:36.613128 master-0 kubenswrapper[4091]: E0313 10:35:36.611200 4091 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 13 10:35:36.613128 master-0 kubenswrapper[4091]: I0313 10:35:36.611284 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 10:35:36.613128 master-0 kubenswrapper[4091]: I0313 10:35:36.611609 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 10:35:36.613128 master-0 kubenswrapper[4091]: I0313 10:35:36.612880 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"] Mar 13 10:35:36.613880 master-0 kubenswrapper[4091]: I0313 10:35:36.613640 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:36.614380 master-0 kubenswrapper[4091]: I0313 10:35:36.614309 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"] Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.616276 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.616691 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"] Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.617072 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"] Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.617211 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.618118 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"] Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.618507 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"] Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.618789 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.618858 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.619022 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.619302 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 10:35:36.619435 master-0 kubenswrapper[4091]: I0313 10:35:36.619381 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:36.620734 master-0 kubenswrapper[4091]: I0313 10:35:36.619685 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 13 10:35:36.629380 master-0 kubenswrapper[4091]: I0313 10:35:36.628315 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 10:35:36.629380 master-0 kubenswrapper[4091]: I0313 10:35:36.628913 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 10:35:36.629380 master-0 kubenswrapper[4091]: I0313 10:35:36.629136 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 10:35:36.629380 master-0 kubenswrapper[4091]: I0313 10:35:36.629283 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 
10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.629532 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.629687 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.629707 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.629829 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.629895 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.630982 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.631002 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.631134 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.631457 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.631571 4091 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 10:35:36.631956 master-0 kubenswrapper[4091]: I0313 10:35:36.631754 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 10:35:36.632343 master-0 kubenswrapper[4091]: I0313 10:35:36.632314 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"] Mar 13 10:35:36.634548 master-0 kubenswrapper[4091]: I0313 10:35:36.634310 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:36.681737 master-0 kubenswrapper[4091]: I0313 10:35:36.681643 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.695210 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.695524 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.695654 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.695705 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.695815 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-wjrpm"] Mar 13 
10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.696041 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.695210 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.696333 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.696254 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.696662 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.696898 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.697019 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.697072 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.697109 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.697171 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.697178 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.697224 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.697559 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.697035 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.698801 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.699068 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.699339 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.699352 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.699557 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.699638 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.699764 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.699977 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.700130 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.700142 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.700607 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"] Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.700716 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9db15a-8854-485b-9863-9cbe5dddd977-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:35:36.703496 master-0 kubenswrapper[4091]: I0313 10:35:36.700750 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d40b37-0f3d-4531-9fa8-eda965d2337d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.700779 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp6pp\" (UniqueName: \"kubernetes.io/projected/8a305f45-8689-45a8-8c8b-5954f2c863df-kube-api-access-zp6pp\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:36.704782 master-0 
kubenswrapper[4091]: I0313 10:35:36.700806 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.700824 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.700872 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.700832 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-d787l"] Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701063 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.700830 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-serving-cert\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701254 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701278 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cpdn\" (UniqueName: \"kubernetes.io/projected/c455a959-d764-4b4f-a1e0-95c27495dd9d-kube-api-access-2cpdn\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701295 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701317 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5vcv\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-kube-api-access-m5vcv\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701337 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701353 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701397 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-config\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701419 4091 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcjm\" (UniqueName: \"kubernetes.io/projected/42b4d53c-af72-44c8-9605-271445f95f87-kube-api-access-kjcjm\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701434 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-config\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:36.704782 master-0 kubenswrapper[4091]: I0313 10:35:36.701449 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqjkf\" (UniqueName: \"kubernetes.io/projected/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-kube-api-access-qqjkf\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701464 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9db15a-8854-485b-9863-9cbe5dddd977-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701484 4091 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701507 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701531 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ed47c57-533f-43e4-88eb-07da29b4878f-serving-cert\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701555 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701612 4091 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6ed47c57-533f-43e4-88eb-07da29b4878f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701635 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701655 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjk5l\" (UniqueName: \"kubernetes.io/projected/6ed47c57-533f-43e4-88eb-07da29b4878f-kube-api-access-rjk5l\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701671 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701673 4091 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701693 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701713 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701730 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/b8d40b37-0f3d-4531-9fa8-eda965d2337d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 10:35:36.701754 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9db15a-8854-485b-9863-9cbe5dddd977-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:35:36.705291 master-0 kubenswrapper[4091]: I0313 
10:35:36.701770 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rht\" (UniqueName: \"kubernetes.io/projected/b8d40b37-0f3d-4531-9fa8-eda965d2337d-kube-api-access-l5rht\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701786 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4d5479f3-51ec-4b93-8188-21cdda44828d-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701813 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grplv\" (UniqueName: \"kubernetes.io/projected/574bf255-14b3-40af-b240-2d3abd5b86b8-kube-api-access-grplv\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701831 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701849 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mnrlx\" (UniqueName: \"kubernetes.io/projected/866cf034-8fd8-4f16-8e9b-68627228aa8d-kube-api-access-mnrlx\") pod \"csi-snapshot-controller-operator-5685fbc7d-mfvmx\" (UID: \"866cf034-8fd8-4f16-8e9b-68627228aa8d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701867 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b4d53c-af72-44c8-9605-271445f95f87-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701895 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701910 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701930 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701946 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701965 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-config\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.701982 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-config\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.702052 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/3ff2ab1c-7057-4e18-8e32-68807f86532a-kube-api-access-8c4rc\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " 
pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.702105 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6xlb\" (UniqueName: \"kubernetes.io/projected/4d5479f3-51ec-4b93-8188-21cdda44828d-kube-api-access-j6xlb\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.702127 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-serving-cert\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:36.705784 master-0 kubenswrapper[4091]: I0313 10:35:36.702179 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-client\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702208 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-serving-cert\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702235 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22bwx\" (UniqueName: \"kubernetes.io/projected/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-kube-api-access-22bwx\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702253 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdlr\" (UniqueName: \"kubernetes.io/projected/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-kube-api-access-lpdlr\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.701558 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"]
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.701930 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702658 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"]
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702681 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"]
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702692 4091 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-gdjjd"]
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702340 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702375 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702407 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702439 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702556 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702608 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702901 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"]
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.702947 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.703127 4091 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.703180 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.704921 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.705047 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.705270 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.705482 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 13 10:35:36.706189 master-0 kubenswrapper[4091]: I0313 10:35:36.705889 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 10:35:36.707931 master-0 kubenswrapper[4091]: I0313 10:35:36.707887 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"]
Mar 13 10:35:36.708010 master-0 kubenswrapper[4091]: I0313 10:35:36.707982 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 10:35:36.708187 master-0 kubenswrapper[4091]: I0313 10:35:36.708164 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 13 10:35:36.708247 master-0 kubenswrapper[4091]: I0313 10:35:36.708212 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 10:35:36.708283 master-0 kubenswrapper[4091]: I0313 10:35:36.708226 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 13 10:35:36.708553 master-0 kubenswrapper[4091]: I0313 10:35:36.708482 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 10:35:36.708553 master-0 kubenswrapper[4091]: I0313 10:35:36.708517 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 13 10:35:36.708553 master-0 kubenswrapper[4091]: I0313 10:35:36.708526 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 10:35:36.708892 master-0 kubenswrapper[4091]: I0313 10:35:36.708665 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 10:35:36.708892 master-0 kubenswrapper[4091]: I0313 10:35:36.708729 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 13 10:35:36.708892 master-0 kubenswrapper[4091]: I0313 10:35:36.708745 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 10:35:36.708892 master-0 kubenswrapper[4091]: I0313 10:35:36.708774 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 13 10:35:36.708892 master-0 kubenswrapper[4091]: I0313 10:35:36.708485 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 10:35:36.708892 master-0 kubenswrapper[4091]: I0313 10:35:36.708851 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 10:35:36.708892 master-0 kubenswrapper[4091]: I0313 10:35:36.708891 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 10:35:36.710136 master-0 kubenswrapper[4091]: I0313 10:35:36.708950 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 10:35:36.710136 master-0 kubenswrapper[4091]: I0313 10:35:36.708974 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 10:35:36.710136 master-0 kubenswrapper[4091]: I0313 10:35:36.709014 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 10:35:36.710136 master-0 kubenswrapper[4091]: I0313 10:35:36.708670 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 13 10:35:36.710136 master-0 kubenswrapper[4091]: I0313 10:35:36.709244 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 13 10:35:36.710136 master-0 kubenswrapper[4091]: I0313 10:35:36.709289 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 13 10:35:36.710136 master-0 kubenswrapper[4091]: I0313 10:35:36.709252 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 10:35:36.710136 master-0 kubenswrapper[4091]: I0313 10:35:36.709407 4091 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 10:35:36.710136 master-0 kubenswrapper[4091]: I0313 10:35:36.709991 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 10:35:36.710657 master-0 kubenswrapper[4091]: I0313 10:35:36.710632 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 10:35:36.710846 master-0 kubenswrapper[4091]: I0313 10:35:36.710823 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 13 10:35:36.710944 master-0 kubenswrapper[4091]: I0313 10:35:36.710910 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 10:35:36.711003 master-0 kubenswrapper[4091]: I0313 10:35:36.710982 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 13 10:35:36.711147 master-0 kubenswrapper[4091]: I0313 10:35:36.711116 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 10:35:36.711847 master-0 kubenswrapper[4091]: I0313 10:35:36.711814 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 10:35:36.712045 master-0 kubenswrapper[4091]: I0313 10:35:36.712010 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"]
Mar 13 10:35:36.713098 master-0 kubenswrapper[4091]: I0313 10:35:36.713058 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx"]
Mar 13 10:35:36.716649 master-0 kubenswrapper[4091]: I0313 10:35:36.716556 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"]
Mar 13 10:35:36.720583 master-0 kubenswrapper[4091]: I0313 10:35:36.719628 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"]
Mar 13 10:35:36.720583 master-0 kubenswrapper[4091]: I0313 10:35:36.720102 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-wjrpm"]
Mar 13 10:35:36.726715 master-0 kubenswrapper[4091]: I0313 10:35:36.725192 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"]
Mar 13 10:35:36.737191 master-0 kubenswrapper[4091]: I0313 10:35:36.737140 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 13 10:35:36.737647 master-0 kubenswrapper[4091]: I0313 10:35:36.737606 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 13 10:35:36.737974 master-0 kubenswrapper[4091]: I0313 10:35:36.737950 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 13 10:35:36.738871 master-0 kubenswrapper[4091]: I0313 10:35:36.738838 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"]
Mar 13 10:35:36.739538 master-0 kubenswrapper[4091]: I0313 10:35:36.739457 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"]
Mar 13 10:35:36.741065 master-0 kubenswrapper[4091]: I0313 10:35:36.741025 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"]
Mar 13 10:35:36.742242 master-0 kubenswrapper[4091]: I0313 10:35:36.742197 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"]
Mar 13 10:35:36.742998 master-0 kubenswrapper[4091]: I0313 10:35:36.742949 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"]
Mar 13 10:35:36.744156 master-0 kubenswrapper[4091]: I0313 10:35:36.744131 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"]
Mar 13 10:35:36.745284 master-0 kubenswrapper[4091]: I0313 10:35:36.745228 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"]
Mar 13 10:35:36.748512 master-0 kubenswrapper[4091]: I0313 10:35:36.748477 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"]
Mar 13 10:35:36.749884 master-0 kubenswrapper[4091]: I0313 10:35:36.749836 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-d787l"]
Mar 13 10:35:36.751575 master-0 kubenswrapper[4091]: I0313 10:35:36.751539 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"]
Mar 13 10:35:36.753002 master-0 kubenswrapper[4091]: I0313 10:35:36.752972 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"]
Mar 13 10:35:36.754397 master-0 kubenswrapper[4091]: I0313 10:35:36.754368 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"]
Mar 13 10:35:36.802987 master-0 kubenswrapper[4091]: I0313 10:35:36.802781 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName:
\"kubernetes.io/configmap/4d5479f3-51ec-4b93-8188-21cdda44828d-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:36.802987 master-0 kubenswrapper[4091]: I0313 10:35:36.802850 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec3168fc-6c8f-4603-94e0-17b1ae22a802-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:36.803100 master-0 kubenswrapper[4091]: I0313 10:35:36.803071 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/b12e76f4-b960-4534-90e6-a2cdbecd1728-kube-api-access-xq9dl\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:36.803135 master-0 kubenswrapper[4091]: I0313 10:35:36.803119 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grplv\" (UniqueName: \"kubernetes.io/projected/574bf255-14b3-40af-b240-2d3abd5b86b8-kube-api-access-grplv\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:36.803175 master-0 kubenswrapper[4091]: I0313 10:35:36.803148 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3168fc-6c8f-4603-94e0-17b1ae22a802-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:36.803208 master-0 kubenswrapper[4091]: I0313 10:35:36.803174 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a998af-4fc0-4078-a6a0-93dde6c00508-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:36.803244 master-0 kubenswrapper[4091]: I0313 10:35:36.803223 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803323 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrlx\" (UniqueName: \"kubernetes.io/projected/866cf034-8fd8-4f16-8e9b-68627228aa8d-kube-api-access-mnrlx\") pod \"csi-snapshot-controller-operator-5685fbc7d-mfvmx\" (UID: \"866cf034-8fd8-4f16-8e9b-68627228aa8d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803422 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b4d53c-af72-44c8-9605-271445f95f87-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803454 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd2mn\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-kube-api-access-qd2mn\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803495 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84xk\" (UniqueName: \"kubernetes.io/projected/2afe3890-e844-4dd3-ba49-3ac9178549bf-kube-api-access-d84xk\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803535 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803565 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803626 4091
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803653 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803676 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/3ff2ab1c-7057-4e18-8e32-68807f86532a-kube-api-access-8c4rc\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803813 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.803821 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-config\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.804179 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-config\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:36.804254 master-0 kubenswrapper[4091]: I0313 10:35:36.804263 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6xlb\" (UniqueName: \"kubernetes.io/projected/4d5479f3-51ec-4b93-8188-21cdda44828d-kube-api-access-j6xlb\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804292 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-serving-cert\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804336 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804371 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59zw\" (UniqueName: \"kubernetes.io/projected/95339220-324d-45e7-bdc2-e4f42fbd1d32-kube-api-access-j59zw\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804428 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpdlr\" (UniqueName: \"kubernetes.io/projected/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-kube-api-access-lpdlr\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804455 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804470 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804510 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\"
(UniqueName: \"kubernetes.io/secret/37b2e803-302b-4650-b18f-d3d2dd703bd5-serving-cert\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804516 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b4d53c-af72-44c8-9605-271445f95f87-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804529 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-config\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804565 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:36.804667 master-0 kubenswrapper[4091]: I0313 10:35:36.804578 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-client\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:36.804956 master-0 kubenswrapper[4091]: I0313 10:35:36.804774 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-serving-cert\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:36.804956 master-0 kubenswrapper[4091]: I0313 10:35:36.804800 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22bwx\" (UniqueName: \"kubernetes.io/projected/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-kube-api-access-22bwx\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:36.804956 master-0 kubenswrapper[4091]: I0313 10:35:36.804855 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9db15a-8854-485b-9863-9cbe5dddd977-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:35:36.804956 master-0 kubenswrapper[4091]: I0313 10:35:36.804884 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-config\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:36.805297 master-0 kubenswrapper[4091]: I0313 10:35:36.805067 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d40b37-0f3d-4531-9fa8-eda965d2337d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:35:36.805297 master-0 kubenswrapper[4091]: I0313 10:35:36.805155 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp6pp\" (UniqueName: \"kubernetes.io/projected/8a305f45-8689-45a8-8c8b-5954f2c863df-kube-api-access-zp6pp\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:35:36.805297 master-0 kubenswrapper[4091]: I0313 10:35:36.805187 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:36.805297 master-0 kubenswrapper[4091]: I0313 10:35:36.805216 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-serving-cert\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: E0313 10:35:36.805818 4091 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: E0313 10:35:36.805886 4091 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.305869851 +0000 UTC m=+175.994592313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.806301 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9db15a-8854-485b-9863-9cbe5dddd977-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.805276 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a998af-4fc0-4078-a6a0-93dde6c00508-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.806431 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b12e76f4-b960-4534-90e6-a2cdbecd1728-host-slash\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.806464 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.806564 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cpdn\" (UniqueName: \"kubernetes.io/projected/c455a959-d764-4b4f-a1e0-95c27495dd9d-kube-api-access-2cpdn\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.806632 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.806666 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.806727 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b2e803-302b-4650-b18f-d3d2dd703bd5-config\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: E0313 10:35:36.806772 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.806761 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vcv\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-kube-api-access-m5vcv\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:36.806933 master-0 kubenswrapper[4091]: I0313 10:35:36.806835 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:36.807375 master-0 kubenswrapper[4091]: E0313 10:35:36.806961 4091 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:36.807375 master-0 kubenswrapper[4091]: E0313 10:35:36.807010 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.306994301 +0000 UTC m=+175.995716763 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found Mar 13 10:35:36.807375 master-0 kubenswrapper[4091]: I0313 10:35:36.807056 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:35:36.807375 master-0 kubenswrapper[4091]: E0313 10:35:36.807105 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.307095754 +0000 UTC m=+175.995818306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found Mar 13 10:35:36.807375 master-0 kubenswrapper[4091]: E0313 10:35:36.807161 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 10:35:36.807375 master-0 kubenswrapper[4091]: E0313 10:35:36.807191 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.307183226 +0000 UTC m=+175.995905688 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found Mar 13 10:35:36.809373 master-0 kubenswrapper[4091]: I0313 10:35:36.809104 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-config\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:36.809373 master-0 kubenswrapper[4091]: I0313 10:35:36.809200 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-config\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:35:36.809373 master-0 kubenswrapper[4091]: I0313 10:35:36.809273 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcjm\" (UniqueName: \"kubernetes.io/projected/42b4d53c-af72-44c8-9605-271445f95f87-kube-api-access-kjcjm\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:36.809373 master-0 kubenswrapper[4091]: I0313 10:35:36.809299 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqjkf\" (UniqueName: \"kubernetes.io/projected/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-kube-api-access-qqjkf\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: 
\"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:36.809373 master-0 kubenswrapper[4091]: I0313 10:35:36.809351 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9db15a-8854-485b-9863-9cbe5dddd977-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:35:36.809373 master-0 kubenswrapper[4091]: I0313 10:35:36.809378 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:36.809739 master-0 kubenswrapper[4091]: I0313 10:35:36.809430 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:36.809739 master-0 kubenswrapper[4091]: I0313 10:35:36.809460 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7667717b-fb74-456b-8615-16475cb69e98-trusted-ca\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 
10:35:36.809739 master-0 kubenswrapper[4091]: I0313 10:35:36.809521 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ed47c57-533f-43e4-88eb-07da29b4878f-serving-cert\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:36.809739 master-0 kubenswrapper[4091]: I0313 10:35:36.809550 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:36.809739 master-0 kubenswrapper[4091]: I0313 10:35:36.809635 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:36.809739 master-0 kubenswrapper[4091]: I0313 10:35:36.809712 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6ed47c57-533f-43e4-88eb-07da29b4878f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:36.809739 master-0 kubenswrapper[4091]: I0313 10:35:36.809721 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-config\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:36.809976 master-0 kubenswrapper[4091]: I0313 10:35:36.809767 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-bound-sa-token\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:36.809976 master-0 kubenswrapper[4091]: I0313 10:35:36.809807 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:36.809976 master-0 kubenswrapper[4091]: I0313 10:35:36.809864 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b12e76f4-b960-4534-90e6-a2cdbecd1728-iptables-alerter-script\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd" Mar 13 10:35:36.809976 master-0 kubenswrapper[4091]: I0313 10:35:36.809916 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjk5l\" (UniqueName: \"kubernetes.io/projected/6ed47c57-533f-43e4-88eb-07da29b4878f-kube-api-access-rjk5l\") pod 
\"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:36.809976 master-0 kubenswrapper[4091]: I0313 10:35:36.809947 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3168fc-6c8f-4603-94e0-17b1ae22a802-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" Mar 13 10:35:36.810108 master-0 kubenswrapper[4091]: I0313 10:35:36.809970 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.810108 master-0 kubenswrapper[4091]: I0313 10:35:36.810019 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.810108 master-0 kubenswrapper[4091]: I0313 10:35:36.810050 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knkb7\" (UniqueName: \"kubernetes.io/projected/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-kube-api-access-knkb7\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:36.810108 master-0 
kubenswrapper[4091]: I0313 10:35:36.810101 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:35:36.810219 master-0 kubenswrapper[4091]: I0313 10:35:36.810129 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:36.810219 master-0 kubenswrapper[4091]: I0313 10:35:36.810177 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/b8d40b37-0f3d-4531-9fa8-eda965d2337d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:35:36.810219 master-0 kubenswrapper[4091]: I0313 10:35:36.810207 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp2qn\" (UniqueName: \"kubernetes.io/projected/37b2e803-302b-4650-b18f-d3d2dd703bd5-kube-api-access-hp2qn\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" Mar 13 10:35:36.810313 master-0 kubenswrapper[4091]: I0313 10:35:36.810264 4091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p29zg\" (UniqueName: 
\"kubernetes.io/projected/a1a998af-4fc0-4078-a6a0-93dde6c00508-kube-api-access-p29zg\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" Mar 13 10:35:36.810313 master-0 kubenswrapper[4091]: I0313 10:35:36.810295 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9db15a-8854-485b-9863-9cbe5dddd977-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:35:36.810398 master-0 kubenswrapper[4091]: I0313 10:35:36.810349 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rht\" (UniqueName: \"kubernetes.io/projected/b8d40b37-0f3d-4531-9fa8-eda965d2337d-kube-api-access-l5rht\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: I0313 10:35:36.810453 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6ed47c57-533f-43e4-88eb-07da29b4878f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: I0313 10:35:36.811196 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4d5479f3-51ec-4b93-8188-21cdda44828d-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" 
(UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: I0313 10:35:36.811451 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: I0313 10:35:36.811454 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d40b37-0f3d-4531-9fa8-eda965d2337d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: E0313 10:35:36.811713 4091 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: E0313 10:35:36.811761 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.311746288 +0000 UTC m=+176.000468750 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: E0313 10:35:36.811795 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: E0313 10:35:36.811821 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.3118148 +0000 UTC m=+176.000537262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: I0313 10:35:36.811927 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.812014 master-0 kubenswrapper[4091]: I0313 10:35:36.812001 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-serving-cert\") pod 
\"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:36.829056 master-0 kubenswrapper[4091]: E0313 10:35:36.812087 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 10:35:36.829056 master-0 kubenswrapper[4091]: E0313 10:35:36.812136 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.312120928 +0000 UTC m=+176.000843480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found Mar 13 10:35:36.829056 master-0 kubenswrapper[4091]: I0313 10:35:36.812091 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/b8d40b37-0f3d-4531-9fa8-eda965d2337d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:35:36.829056 master-0 kubenswrapper[4091]: I0313 10:35:36.812264 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-client\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.829056 master-0 kubenswrapper[4091]: 
I0313 10:35:36.812273 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-config\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:35:36.829056 master-0 kubenswrapper[4091]: I0313 10:35:36.812867 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-serving-cert\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.829056 master-0 kubenswrapper[4091]: I0313 10:35:36.814153 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ed47c57-533f-43e4-88eb-07da29b4878f-serving-cert\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:36.829056 master-0 kubenswrapper[4091]: I0313 10:35:36.814280 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-serving-cert\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:35:36.829056 master-0 kubenswrapper[4091]: I0313 10:35:36.814939 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9db15a-8854-485b-9863-9cbe5dddd977-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:35:36.835223 master-0 kubenswrapper[4091]: I0313 10:35:36.835172 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:36.836437 master-0 kubenswrapper[4091]: I0313 10:35:36.836306 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpdlr\" (UniqueName: \"kubernetes.io/projected/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-kube-api-access-lpdlr\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:36.838481 master-0 kubenswrapper[4091]: I0313 10:35:36.838431 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp6pp\" (UniqueName: \"kubernetes.io/projected/8a305f45-8689-45a8-8c8b-5954f2c863df-kube-api-access-zp6pp\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:36.838913 master-0 kubenswrapper[4091]: I0313 10:35:36.838877 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grplv\" (UniqueName: \"kubernetes.io/projected/574bf255-14b3-40af-b240-2d3abd5b86b8-kube-api-access-grplv\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:35:36.839250 master-0 kubenswrapper[4091]: I0313 10:35:36.839221 
4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:36.839810 master-0 kubenswrapper[4091]: I0313 10:35:36.839785 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6xlb\" (UniqueName: \"kubernetes.io/projected/4d5479f3-51ec-4b93-8188-21cdda44828d-kube-api-access-j6xlb\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:36.840114 master-0 kubenswrapper[4091]: I0313 10:35:36.840089 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:36.840349 master-0 kubenswrapper[4091]: I0313 10:35:36.840321 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrlx\" (UniqueName: \"kubernetes.io/projected/866cf034-8fd8-4f16-8e9b-68627228aa8d-kube-api-access-mnrlx\") pod \"csi-snapshot-controller-operator-5685fbc7d-mfvmx\" (UID: \"866cf034-8fd8-4f16-8e9b-68627228aa8d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx"
Mar 13 10:35:36.840743 master-0 kubenswrapper[4091]: I0313 10:35:36.840705 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cpdn\" (UniqueName: \"kubernetes.io/projected/c455a959-d764-4b4f-a1e0-95c27495dd9d-kube-api-access-2cpdn\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:36.840798 master-0 kubenswrapper[4091]: I0313 10:35:36.840784 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqjkf\" (UniqueName: \"kubernetes.io/projected/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-kube-api-access-qqjkf\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:36.841284 master-0 kubenswrapper[4091]: I0313 10:35:36.841249 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcjm\" (UniqueName: \"kubernetes.io/projected/42b4d53c-af72-44c8-9605-271445f95f87-kube-api-access-kjcjm\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:36.841470 master-0 kubenswrapper[4091]: I0313 10:35:36.841401 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vcv\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-kube-api-access-m5vcv\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:36.841674 master-0 kubenswrapper[4091]: I0313 10:35:36.841640 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/3ff2ab1c-7057-4e18-8e32-68807f86532a-kube-api-access-8c4rc\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:36.842920 master-0 kubenswrapper[4091]: I0313 10:35:36.842871 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5rht\" (UniqueName: \"kubernetes.io/projected/b8d40b37-0f3d-4531-9fa8-eda965d2337d-kube-api-access-l5rht\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:35:36.843908 master-0 kubenswrapper[4091]: I0313 10:35:36.843840 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjk5l\" (UniqueName: \"kubernetes.io/projected/6ed47c57-533f-43e4-88eb-07da29b4878f-kube-api-access-rjk5l\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:35:36.849282 master-0 kubenswrapper[4091]: I0313 10:35:36.849234 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9db15a-8854-485b-9863-9cbe5dddd977-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:35:36.849939 master-0 kubenswrapper[4091]: I0313 10:35:36.849884 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22bwx\" (UniqueName: \"kubernetes.io/projected/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-kube-api-access-22bwx\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:36.883023 master-0 kubenswrapper[4091]: I0313 10:35:36.882970 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:35:36.892264 master-0 kubenswrapper[4091]: I0313 10:35:36.892222 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:36.911328 master-0 kubenswrapper[4091]: I0313 10:35:36.911211 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-bound-sa-token\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:36.911579 master-0 kubenswrapper[4091]: I0313 10:35:36.911350 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:36.911579 master-0 kubenswrapper[4091]: I0313 10:35:36.911432 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b12e76f4-b960-4534-90e6-a2cdbecd1728-iptables-alerter-script\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:36.911579 master-0 kubenswrapper[4091]: I0313 10:35:36.911459 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3168fc-6c8f-4603-94e0-17b1ae22a802-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:36.911723 master-0 kubenswrapper[4091]: I0313 10:35:36.911612 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knkb7\" (UniqueName: \"kubernetes.io/projected/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-kube-api-access-knkb7\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:36.912164 master-0 kubenswrapper[4091]: I0313 10:35:36.912112 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:36.912250 master-0 kubenswrapper[4091]: I0313 10:35:36.912198 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp2qn\" (UniqueName: \"kubernetes.io/projected/37b2e803-302b-4650-b18f-d3d2dd703bd5-kube-api-access-hp2qn\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:36.912250 master-0 kubenswrapper[4091]: I0313 10:35:36.912236 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p29zg\" (UniqueName: \"kubernetes.io/projected/a1a998af-4fc0-4078-a6a0-93dde6c00508-kube-api-access-p29zg\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:36.912332 master-0 kubenswrapper[4091]: I0313 10:35:36.912314 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec3168fc-6c8f-4603-94e0-17b1ae22a802-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: E0313 10:35:36.912481 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: E0313 10:35:36.912552 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.412530353 +0000 UTC m=+176.101252895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.912577 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3168fc-6c8f-4603-94e0-17b1ae22a802-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.912625 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a998af-4fc0-4078-a6a0-93dde6c00508-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.912650 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/b12e76f4-b960-4534-90e6-a2cdbecd1728-kube-api-access-xq9dl\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.912680 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd2mn\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-kube-api-access-qd2mn\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.912728 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84xk\" (UniqueName: \"kubernetes.io/projected/2afe3890-e844-4dd3-ba49-3ac9178549bf-kube-api-access-d84xk\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.912819 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.912847 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59zw\" (UniqueName: \"kubernetes.io/projected/95339220-324d-45e7-bdc2-e4f42fbd1d32-kube-api-access-j59zw\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.912898 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: E0313 10:35:36.912963 4091 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.912980 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b2e803-302b-4650-b18f-d3d2dd703bd5-serving-cert\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: E0313 10:35:36.912994 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.412982945 +0000 UTC m=+176.101705407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: I0313 10:35:36.913336 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a998af-4fc0-4078-a6a0-93dde6c00508-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:36.914929 master-0 kubenswrapper[4091]: E0313 10:35:36.913104 4091 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.913552 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3168fc-6c8f-4603-94e0-17b1ae22a802-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: E0313 10:35:36.913615 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.41355438 +0000 UTC m=+176.102276902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.913774 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b12e76f4-b960-4534-90e6-a2cdbecd1728-host-slash\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.913817 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b12e76f4-b960-4534-90e6-a2cdbecd1728-iptables-alerter-script\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.913366 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b12e76f4-b960-4534-90e6-a2cdbecd1728-host-slash\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.913918 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.913973 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b2e803-302b-4650-b18f-d3d2dd703bd5-config\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.914023 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7667717b-fb74-456b-8615-16475cb69e98-trusted-ca\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: E0313 10:35:36.914177 4091 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: E0313 10:35:36.914222 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:37.414208088 +0000 UTC m=+176.102930620 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.916021 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b2e803-302b-4650-b18f-d3d2dd703bd5-config\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.916465 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a998af-4fc0-4078-a6a0-93dde6c00508-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:36.917615 master-0 kubenswrapper[4091]: I0313 10:35:36.917079 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:36.918109 master-0 kubenswrapper[4091]: I0313 10:35:36.917661 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7667717b-fb74-456b-8615-16475cb69e98-trusted-ca\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:36.918109 master-0 kubenswrapper[4091]: I0313 10:35:36.918039 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3168fc-6c8f-4603-94e0-17b1ae22a802-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:36.919347 master-0 kubenswrapper[4091]: I0313 10:35:36.919284 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a998af-4fc0-4078-a6a0-93dde6c00508-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:36.922724 master-0 kubenswrapper[4091]: I0313 10:35:36.920868 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b2e803-302b-4650-b18f-d3d2dd703bd5-serving-cert\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:36.950032 master-0 kubenswrapper[4091]: I0313 10:35:36.949958 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knkb7\" (UniqueName: \"kubernetes.io/projected/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-kube-api-access-knkb7\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:36.962919 master-0 kubenswrapper[4091]: I0313 10:35:36.962866 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p29zg\" (UniqueName: \"kubernetes.io/projected/a1a998af-4fc0-4078-a6a0-93dde6c00508-kube-api-access-p29zg\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:36.989623 master-0 kubenswrapper[4091]: I0313 10:35:36.989570 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec3168fc-6c8f-4603-94e0-17b1ae22a802-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:37.000641 master-0 kubenswrapper[4091]: I0313 10:35:37.000024 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-bound-sa-token\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:37.029611 master-0 kubenswrapper[4091]: I0313 10:35:37.023939 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:35:37.043287 master-0 kubenswrapper[4091]: I0313 10:35:37.041815 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:37.047789 master-0 kubenswrapper[4091]: I0313 10:35:37.047745 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:35:37.050306 master-0 kubenswrapper[4091]: I0313 10:35:37.050257 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp2qn\" (UniqueName: \"kubernetes.io/projected/37b2e803-302b-4650-b18f-d3d2dd703bd5-kube-api-access-hp2qn\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:37.050365 master-0 kubenswrapper[4091]: I0313 10:35:37.050339 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59zw\" (UniqueName: \"kubernetes.io/projected/95339220-324d-45e7-bdc2-e4f42fbd1d32-kube-api-access-j59zw\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:37.063689 master-0 kubenswrapper[4091]: I0313 10:35:37.063632 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:35:37.064535 master-0 kubenswrapper[4091]: I0313 10:35:37.064504 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84xk\" (UniqueName: \"kubernetes.io/projected/2afe3890-e844-4dd3-ba49-3ac9178549bf-kube-api-access-d84xk\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:37.082000 master-0 kubenswrapper[4091]: I0313 10:35:37.080473 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx"
Mar 13 10:35:37.090705 master-0 kubenswrapper[4091]: I0313 10:35:37.088883 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd2mn\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-kube-api-access-qd2mn\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:37.093913 master-0 kubenswrapper[4091]: I0313 10:35:37.093394 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"]
Mar 13 10:35:37.106085 master-0 kubenswrapper[4091]: I0313 10:35:37.106027 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/b12e76f4-b960-4534-90e6-a2cdbecd1728-kube-api-access-xq9dl\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:37.113662 master-0 kubenswrapper[4091]: I0313 10:35:37.113601 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:37.128068 master-0 kubenswrapper[4091]: I0313 10:35:37.128027 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"]
Mar 13 10:35:37.211502 master-0 kubenswrapper[4091]: I0313 10:35:37.208447 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:37.237876 master-0 kubenswrapper[4091]: I0313 10:35:37.237806 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" event={"ID":"8f9db15a-8854-485b-9863-9cbe5dddd977","Type":"ContainerStarted","Data":"c3ef257d3865e4ef11b927a21a93e51aafc4c9ebd98baa7d651806b2a01e30df"}
Mar 13 10:35:37.240660 master-0 kubenswrapper[4091]: I0313 10:35:37.240573 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" event={"ID":"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba","Type":"ContainerStarted","Data":"40bc8729edbc545950cfd4248f291a2938cf20232c66e767905dda5ad583859c"}
Mar 13 10:35:37.253885 master-0 kubenswrapper[4091]: I0313 10:35:37.253821 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:37.262924 master-0 kubenswrapper[4091]: I0313 10:35:37.262855 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"]
Mar 13 10:35:37.280796 master-0 kubenswrapper[4091]: I0313 10:35:37.280660 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:37.290096 master-0 kubenswrapper[4091]: I0313 10:35:37.288127 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:37.317842 master-0 kubenswrapper[4091]: I0313 10:35:37.317277 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"]
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: I0313 10:35:37.330267 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: I0313 10:35:37.330340 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: E0313 10:35:37.330478 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: E0313 10:35:37.330521 4091 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: E0313 10:35:37.330534 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.33051454 +0000 UTC m=+177.019237002 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: I0313 10:35:37.330707 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: I0313 10:35:37.330779 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: I0313 10:35:37.330850 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: I0313 10:35:37.330908 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: I0313 10:35:37.330985 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: E0313 10:35:37.331325 4091 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: E0313 10:35:37.331367 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.331349662 +0000 UTC m=+177.020072164 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: E0313 10:35:37.331448 4091 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: E0313 10:35:37.332360 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 10:35:37.333214 master-0 kubenswrapper[4091]: E0313 10:35:37.332413 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.3323974 +0000 UTC m=+177.021119862 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found
Mar 13 10:35:37.341162 master-0 kubenswrapper[4091]: E0313 10:35:37.332436 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.332427571 +0000 UTC m=+177.021150023 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:37.341162 master-0 kubenswrapper[4091]: E0313 10:35:37.332543 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 10:35:37.341162 master-0 kubenswrapper[4091]: E0313 10:35:37.332569 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.332562425 +0000 UTC m=+177.021284887 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found
Mar 13 10:35:37.341162 master-0 kubenswrapper[4091]: E0313 10:35:37.332634 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 10:35:37.341162 master-0 kubenswrapper[4091]: E0313 10:35:37.332705 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.332691988 +0000 UTC m=+177.021414490 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found Mar 13 10:35:37.341162 master-0 kubenswrapper[4091]: E0313 10:35:37.332810 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.33277214 +0000 UTC m=+177.021494652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found Mar 13 10:35:37.353749 master-0 kubenswrapper[4091]: I0313 10:35:37.353116 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"] Mar 13 10:35:37.365984 master-0 kubenswrapper[4091]: W0313 10:35:37.365880 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod282bc9ff_1bc0_421b_9cd3_d88d7c5e5303.slice/crio-f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47 WatchSource:0}: Error finding container f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47: Status 404 returned error can't find the container with id f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47 Mar 13 10:35:37.388974 master-0 kubenswrapper[4091]: I0313 10:35:37.388911 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx"] Mar 13 
10:35:37.402476 master-0 kubenswrapper[4091]: I0313 10:35:37.402387 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"] Mar 13 10:35:37.422372 master-0 kubenswrapper[4091]: I0313 10:35:37.422060 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"] Mar 13 10:35:37.432153 master-0 kubenswrapper[4091]: I0313 10:35:37.432108 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:35:37.432153 master-0 kubenswrapper[4091]: I0313 10:35:37.432154 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:37.432271 master-0 kubenswrapper[4091]: I0313 10:35:37.432198 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:37.432271 master-0 kubenswrapper[4091]: I0313 10:35:37.432269 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: 
\"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:35:37.432602 master-0 kubenswrapper[4091]: E0313 10:35:37.432544 4091 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:35:37.432689 master-0 kubenswrapper[4091]: E0313 10:35:37.432661 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.43263634 +0000 UTC m=+177.121358872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found Mar 13 10:35:37.432689 master-0 kubenswrapper[4091]: E0313 10:35:37.432678 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 10:35:37.432850 master-0 kubenswrapper[4091]: E0313 10:35:37.432709 4091 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 10:35:37.432850 master-0 kubenswrapper[4091]: E0313 10:35:37.432736 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.432718152 +0000 UTC m=+177.121440614 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found Mar 13 10:35:37.432850 master-0 kubenswrapper[4091]: E0313 10:35:37.432763 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.432752183 +0000 UTC m=+177.121474645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found Mar 13 10:35:37.432850 master-0 kubenswrapper[4091]: E0313 10:35:37.432798 4091 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 10:35:37.432850 master-0 kubenswrapper[4091]: E0313 10:35:37.432828 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.432819005 +0000 UTC m=+177.121541587 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found Mar 13 10:35:37.471466 master-0 kubenswrapper[4091]: I0313 10:35:37.471408 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"] Mar 13 10:35:37.479410 master-0 kubenswrapper[4091]: W0313 10:35:37.479345 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a998af_4fc0_4078_a6a0_93dde6c00508.slice/crio-981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d WatchSource:0}: Error finding container 981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d: Status 404 returned error can't find the container with id 981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d Mar 13 10:35:37.493993 master-0 kubenswrapper[4091]: I0313 10:35:37.493954 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"] Mar 13 10:35:37.501344 master-0 kubenswrapper[4091]: W0313 10:35:37.501287 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec3168fc_6c8f_4603_94e0_17b1ae22a802.slice/crio-8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff WatchSource:0}: Error finding container 8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff: Status 404 returned error can't find the container with id 8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff Mar 13 10:35:37.549003 master-0 kubenswrapper[4091]: I0313 10:35:37.548948 4091 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"] Mar 13 10:35:37.789839 master-0 kubenswrapper[4091]: I0313 10:35:37.789642 4091 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 10:35:37.804830 master-0 kubenswrapper[4091]: E0313 10:35:37.804632 4091 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:35:37.804830 master-0 kubenswrapper[4091]: E0313 10:35:37.804786 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config podName:5ed5e77b-948b-4d94-ac9f-440ee3c07e18 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:38.304760746 +0000 UTC m=+176.993483208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config") pod "openshift-apiserver-operator-799b6db4d7-sdg4w" (UID: "5ed5e77b-948b-4d94-ac9f-440ee3c07e18") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:35:37.809995 master-0 kubenswrapper[4091]: I0313 10:35:37.809923 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:35:37.938085 master-0 kubenswrapper[4091]: I0313 10:35:37.938031 4091 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 10:35:38.246852 master-0 kubenswrapper[4091]: I0313 10:35:38.246773 4091 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" event={"ID":"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba","Type":"ContainerStarted","Data":"2c461d42e265a3320bcaee208db9040eedffe39900d9e8aa36490e00a5c604c0"} Mar 13 10:35:38.248662 master-0 kubenswrapper[4091]: I0313 10:35:38.248624 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" event={"ID":"574bf255-14b3-40af-b240-2d3abd5b86b8","Type":"ContainerStarted","Data":"8f00c30651131dc152cafde2dc4f58ee3081e7ee0af524ba7783523529e49fba"} Mar 13 10:35:38.249833 master-0 kubenswrapper[4091]: I0313 10:35:38.249787 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" event={"ID":"b8d40b37-0f3d-4531-9fa8-eda965d2337d","Type":"ContainerStarted","Data":"04e2d5b4e65ad4d6e19280743e00933d366bcbdfdc3c5d7c64aba41673f1a662"} Mar 13 10:35:38.250792 master-0 kubenswrapper[4091]: I0313 10:35:38.250755 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" event={"ID":"37b2e803-302b-4650-b18f-d3d2dd703bd5","Type":"ContainerStarted","Data":"d32de03a3a8ddd97f3e65197fb212f2ef727ae3a334417e63b4520866c016ec6"} Mar 13 10:35:38.251985 master-0 kubenswrapper[4091]: I0313 10:35:38.251929 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" event={"ID":"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303","Type":"ContainerStarted","Data":"f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47"} Mar 13 10:35:38.253084 master-0 kubenswrapper[4091]: I0313 10:35:38.253052 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" 
event={"ID":"1434c4a2-5c4d-478a-a16a-7d6a52ea3099","Type":"ContainerStarted","Data":"cfcbc3062b54d8acdba8fb18315546ff9d2740da776054d7f9430b71a5238353"} Mar 13 10:35:38.254113 master-0 kubenswrapper[4091]: I0313 10:35:38.254072 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-gdjjd" event={"ID":"b12e76f4-b960-4534-90e6-a2cdbecd1728","Type":"ContainerStarted","Data":"3ccf2b15838415a4be52f403df345301db18d66d37f6fa09df717882bb3b0fda"} Mar 13 10:35:38.255162 master-0 kubenswrapper[4091]: I0313 10:35:38.255131 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" event={"ID":"6ed47c57-533f-43e4-88eb-07da29b4878f","Type":"ContainerStarted","Data":"6ae68534d60ba95b9a0cc4c1bb4a76e1716cb7e67493f9e7d66360f5bc7a13b3"} Mar 13 10:35:38.256355 master-0 kubenswrapper[4091]: I0313 10:35:38.256309 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" event={"ID":"ec3168fc-6c8f-4603-94e0-17b1ae22a802","Type":"ContainerStarted","Data":"8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff"} Mar 13 10:35:38.257744 master-0 kubenswrapper[4091]: I0313 10:35:38.257711 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" event={"ID":"a1a998af-4fc0-4078-a6a0-93dde6c00508","Type":"ContainerStarted","Data":"981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d"} Mar 13 10:35:38.258844 master-0 kubenswrapper[4091]: I0313 10:35:38.258815 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" event={"ID":"866cf034-8fd8-4f16-8e9b-68627228aa8d","Type":"ContainerStarted","Data":"186ea687b2b873b969d378ad858dd467c244f19248903ea4dfa2320cfbb636aa"} Mar 13 
10:35:38.342451 master-0 kubenswrapper[4091]: I0313 10:35:38.342371 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:35:38.343032 master-0 kubenswrapper[4091]: I0313 10:35:38.342978 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:38.343093 master-0 kubenswrapper[4091]: I0313 10:35:38.343066 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:38.343158 master-0 kubenswrapper[4091]: I0313 10:35:38.343093 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:35:38.343158 master-0 kubenswrapper[4091]: I0313 10:35:38.343140 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:35:38.343216 master-0 kubenswrapper[4091]: I0313 10:35:38.343190 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:38.343636 master-0 kubenswrapper[4091]: I0313 10:35:38.343225 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:38.343701 master-0 kubenswrapper[4091]: I0313 10:35:38.343667 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:38.343765 master-0 kubenswrapper[4091]: E0313 10:35:38.343491 4091 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:35:38.343799 master-0 kubenswrapper[4091]: E0313 10:35:38.343777 4091 secret.go:189] Couldn't get secret 
openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 10:35:38.343833 master-0 kubenswrapper[4091]: E0313 10:35:38.343818 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.343797622 +0000 UTC m=+179.032520084 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found Mar 13 10:35:38.343869 master-0 kubenswrapper[4091]: E0313 10:35:38.343841 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.343832113 +0000 UTC m=+179.032554655 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found Mar 13 10:35:38.343869 master-0 kubenswrapper[4091]: I0313 10:35:38.343396 4091 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:35:38.343930 master-0 kubenswrapper[4091]: E0313 10:35:38.343681 4091 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:38.343962 master-0 kubenswrapper[4091]: E0313 10:35:38.343936 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.343929755 +0000 UTC m=+179.032652217 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:38.343962 master-0 kubenswrapper[4091]: E0313 10:35:38.343710 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 10:35:38.344016 master-0 kubenswrapper[4091]: E0313 10:35:38.343971 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.343962986 +0000 UTC m=+179.032685538 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found Mar 13 10:35:38.344016 master-0 kubenswrapper[4091]: E0313 10:35:38.343975 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 10:35:38.344079 master-0 kubenswrapper[4091]: E0313 10:35:38.344059 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.344034398 +0000 UTC m=+179.032756900 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found Mar 13 10:35:38.344079 master-0 kubenswrapper[4091]: E0313 10:35:38.344075 4091 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 10:35:38.344141 master-0 kubenswrapper[4091]: E0313 10:35:38.343747 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 10:35:38.344141 master-0 kubenswrapper[4091]: E0313 10:35:38.344108 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.34410006 +0000 UTC m=+179.032822602 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found Mar 13 10:35:38.344141 master-0 kubenswrapper[4091]: E0313 10:35:38.344131 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.34412092 +0000 UTC m=+179.032843462 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found
Mar 13 10:35:38.426859 master-0 kubenswrapper[4091]: I0313 10:35:38.426795 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:38.452268 master-0 kubenswrapper[4091]: I0313 10:35:38.452195 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:38.452499 master-0 kubenswrapper[4091]: I0313 10:35:38.452416 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:38.452549 master-0 kubenswrapper[4091]: I0313 10:35:38.452522 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:38.452633 master-0 kubenswrapper[4091]: I0313 10:35:38.452606 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:38.452857 master-0 kubenswrapper[4091]: E0313 10:35:38.452812 4091 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 10:35:38.452927 master-0 kubenswrapper[4091]: E0313 10:35:38.452904 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.452882859 +0000 UTC m=+179.141605321 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found
Mar 13 10:35:38.453046 master-0 kubenswrapper[4091]: E0313 10:35:38.453006 4091 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:38.453091 master-0 kubenswrapper[4091]: E0313 10:35:38.453061 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.453039673 +0000 UTC m=+179.141762135 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found
Mar 13 10:35:38.453154 master-0 kubenswrapper[4091]: E0313 10:35:38.453126 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 10:35:38.453188 master-0 kubenswrapper[4091]: E0313 10:35:38.453171 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.453159537 +0000 UTC m=+179.141881999 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found
Mar 13 10:35:38.453265 master-0 kubenswrapper[4091]: E0313 10:35:38.453243 4091 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 10:35:38.453302 master-0 kubenswrapper[4091]: E0313 10:35:38.453284 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:40.45327467 +0000 UTC m=+179.141997132 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found
Mar 13 10:35:39.011313 master-0 kubenswrapper[4091]: I0313 10:35:39.002794 4091 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" podStartSLOduration=134.002774835 podStartE2EDuration="2m14.002774835s" podCreationTimestamp="2026-03-13 10:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:35:39.001960673 +0000 UTC m=+177.690683135" watchObservedRunningTime="2026-03-13 10:35:39.002774835 +0000 UTC m=+177.691497297"
Mar 13 10:35:39.202762 master-0 kubenswrapper[4091]: I0313 10:35:39.202689 4091 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:35:39.215512 master-0 kubenswrapper[4091]: I0313 10:35:39.212695 4091 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"]
Mar 13 10:35:39.236768 master-0 kubenswrapper[4091]: W0313 10:35:39.229698 4091 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ed5e77b_948b_4d94_ac9f_440ee3c07e18.slice/crio-a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3 WatchSource:0}: Error finding container a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3: Status 404 returned error can't find the container with id a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3
Mar 13 10:35:39.270238 master-0 kubenswrapper[4091]: I0313 10:35:39.270102 4091 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" event={"ID":"5ed5e77b-948b-4d94-ac9f-440ee3c07e18","Type":"ContainerStarted","Data":"a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3"}
Mar 13 10:35:40.378864 master-0 kubenswrapper[4091]: I0313 10:35:40.378777 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:40.378864 master-0 kubenswrapper[4091]: I0313 10:35:40.378852 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:40.378864 master-0 kubenswrapper[4091]: I0313 10:35:40.378882 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: I0313 10:35:40.378911 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: E0313 10:35:40.379061 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: E0313 10:35:40.379057 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: E0313 10:35:40.379167 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.37914025 +0000 UTC m=+183.067862722 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: I0313 10:35:40.379174 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: E0313 10:35:40.379242 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: E0313 10:35:40.379284 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.379263874 +0000 UTC m=+183.067986336 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: I0313 10:35:40.379304 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: E0313 10:35:40.379335 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.379328736 +0000 UTC m=+183.068051198 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found
Mar 13 10:35:40.379538 master-0 kubenswrapper[4091]: E0313 10:35:40.379485 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 10:35:40.379845 master-0 kubenswrapper[4091]: E0313 10:35:40.379653 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.379626714 +0000 UTC m=+183.068349176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found
Mar 13 10:35:40.379845 master-0 kubenswrapper[4091]: E0313 10:35:40.379671 4091 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:40.379845 master-0 kubenswrapper[4091]: E0313 10:35:40.379721 4091 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 10:35:40.379845 master-0 kubenswrapper[4091]: E0313 10:35:40.379740 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.379722416 +0000 UTC m=+183.068444878 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found
Mar 13 10:35:40.379845 master-0 kubenswrapper[4091]: E0313 10:35:40.379759 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.379750697 +0000 UTC m=+183.068473369 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found
Mar 13 10:35:40.379989 master-0 kubenswrapper[4091]: I0313 10:35:40.379941 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:40.380088 master-0 kubenswrapper[4091]: E0313 10:35:40.380049 4091 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:40.380125 master-0 kubenswrapper[4091]: E0313 10:35:40.380093 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.380083656 +0000 UTC m=+183.068806308 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:40.481349 master-0 kubenswrapper[4091]: I0313 10:35:40.481161 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:40.481349 master-0 kubenswrapper[4091]: I0313 10:35:40.481248 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:40.481756 master-0 kubenswrapper[4091]: E0313 10:35:40.481397 4091 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 10:35:40.481756 master-0 kubenswrapper[4091]: E0313 10:35:40.481507 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.481485647 +0000 UTC m=+183.170208109 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found
Mar 13 10:35:40.481756 master-0 kubenswrapper[4091]: E0313 10:35:40.481544 4091 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 10:35:40.481756 master-0 kubenswrapper[4091]: I0313 10:35:40.481579 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:40.481756 master-0 kubenswrapper[4091]: E0313 10:35:40.481668 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.481646721 +0000 UTC m=+183.170369183 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found
Mar 13 10:35:40.481756 master-0 kubenswrapper[4091]: E0313 10:35:40.481726 4091 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:40.481965 master-0 kubenswrapper[4091]: I0313 10:35:40.481837 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:40.481965 master-0 kubenswrapper[4091]: E0313 10:35:40.481851 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.481835656 +0000 UTC m=+183.170558118 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found
Mar 13 10:35:40.481965 master-0 kubenswrapper[4091]: E0313 10:35:40.481931 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 10:35:40.481965 master-0 kubenswrapper[4091]: E0313 10:35:40.481963 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:35:44.48195643 +0000 UTC m=+183.170678892 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found
Mar 13 10:35:44.442512 master-0 kubenswrapper[4091]: I0313 10:35:44.442012 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:44.442512 master-0 kubenswrapper[4091]: I0313 10:35:44.442416 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:44.442512 master-0 kubenswrapper[4091]: I0313 10:35:44.442450 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:44.442512 master-0 kubenswrapper[4091]: I0313 10:35:44.442474 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:44.442512 master-0 kubenswrapper[4091]: I0313 10:35:44.442500 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:44.443290 master-0 kubenswrapper[4091]: I0313 10:35:44.442527 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:35:44.443290 master-0 kubenswrapper[4091]: I0313 10:35:44.442565 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:44.443290 master-0 kubenswrapper[4091]: E0313 10:35:44.442260 4091 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:44.443290 master-0 kubenswrapper[4091]: E0313 10:35:44.442868 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.442849161 +0000 UTC m=+191.131571623 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:44.443456 master-0 kubenswrapper[4091]: E0313 10:35:44.443306 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:44.443456 master-0 kubenswrapper[4091]: E0313 10:35:44.443339 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.443329434 +0000 UTC m=+191.132051896 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:44.443456 master-0 kubenswrapper[4091]: E0313 10:35:44.443383 4091 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:44.443456 master-0 kubenswrapper[4091]: E0313 10:35:44.443405 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.443397846 +0000 UTC m=+191.132120308 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found
Mar 13 10:35:44.443456 master-0 kubenswrapper[4091]: E0313 10:35:44.443447 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 10:35:44.443674 master-0 kubenswrapper[4091]: E0313 10:35:44.443467 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.443460007 +0000 UTC m=+191.132182469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found
Mar 13 10:35:44.443674 master-0 kubenswrapper[4091]: E0313 10:35:44.443509 4091 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 10:35:44.443674 master-0 kubenswrapper[4091]: E0313 10:35:44.443530 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.443523419 +0000 UTC m=+191.132245881 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found
Mar 13 10:35:44.443674 master-0 kubenswrapper[4091]: E0313 10:35:44.443574 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 10:35:44.443674 master-0 kubenswrapper[4091]: E0313 10:35:44.443617 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.443609091 +0000 UTC m=+191.132331553 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found
Mar 13 10:35:44.443674 master-0 kubenswrapper[4091]: E0313 10:35:44.442796 4091 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 10:35:44.443674 master-0 kubenswrapper[4091]: E0313 10:35:44.443641 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.443634322 +0000 UTC m=+191.132356784 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found
Mar 13 10:35:44.561348 master-0 kubenswrapper[4091]: I0313 10:35:44.543489 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:44.561348 master-0 kubenswrapper[4091]: I0313 10:35:44.543729 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:44.561348 master-0 kubenswrapper[4091]: I0313 10:35:44.543801 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:44.561348 master-0 kubenswrapper[4091]: I0313 10:35:44.543898 4091 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:44.561348 master-0 kubenswrapper[4091]: E0313 10:35:44.561327 4091 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 10:35:44.562200 master-0 kubenswrapper[4091]: E0313 10:35:44.561424 4091 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 10:35:44.562200 master-0 kubenswrapper[4091]: E0313 10:35:44.561449 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.561422973 +0000 UTC m=+191.250145495 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found
Mar 13 10:35:44.562200 master-0 kubenswrapper[4091]: E0313 10:35:44.561469 4091 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:44.562200 master-0 kubenswrapper[4091]: E0313 10:35:44.561504 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.561482505 +0000 UTC m=+191.250205027 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found
Mar 13 10:35:44.562200 master-0 kubenswrapper[4091]: E0313 10:35:44.561523 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.561510915 +0000 UTC m=+191.250233457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found
Mar 13 10:35:44.562200 master-0 kubenswrapper[4091]: E0313 10:35:44.561883 4091 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 10:35:44.562200 master-0 kubenswrapper[4091]: E0313 10:35:44.561955 4091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:35:52.561938397 +0000 UTC m=+191.250660939 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found
Mar 13 10:35:45.368983 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Mar 13 10:35:45.385994 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Mar 13 10:35:45.386313 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Mar 13 10:35:45.387878 master-0 systemd[1]: kubelet.service: Consumed 11.253s CPU time.
Mar 13 10:35:45.409892 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 13 10:35:45.517507 master-0 kubenswrapper[7271]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 10:35:45.517507 master-0 kubenswrapper[7271]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 13 10:35:45.517507 master-0 kubenswrapper[7271]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 10:35:45.517507 master-0 kubenswrapper[7271]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 10:35:45.517507 master-0 kubenswrapper[7271]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 13 10:35:45.517507 master-0 kubenswrapper[7271]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 10:35:45.518795 master-0 kubenswrapper[7271]: I0313 10:35:45.517637 7271 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 10:35:45.524449 master-0 kubenswrapper[7271]: W0313 10:35:45.524411 7271 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524456 7271 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524463 7271 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524468 7271 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524473 7271 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524479 7271 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524484 7271 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524488 7271 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524493 7271 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524499 7271 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524504 7271 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524508 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524513 7271 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524517 7271 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524522 7271 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524527 7271 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524531 7271 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524535 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524540 7271 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524544 7271 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 10:35:45.524540 master-0 kubenswrapper[7271]: W0313 10:35:45.524606 7271 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524611 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524616 7271 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524621 7271 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524626 7271 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524631 7271 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524635 7271 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524640 7271 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524644 7271 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524649 7271 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524654 7271 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524659 7271 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524664 7271 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524680 7271 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524686 7271 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524690 7271 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524695 7271 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524700 7271 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524705 7271 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524710 7271 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 10:35:45.525286 master-0 kubenswrapper[7271]: W0313 10:35:45.524715 7271 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524722 7271 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524729 7271 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524735 7271 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524740 7271 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524745 7271 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524753 7271 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524760 7271 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524766 7271 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524773 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524779 7271 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524785 7271 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524791 7271 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524797 7271 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524803 7271 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524808 7271 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524813 7271 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524818 7271 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524823 7271 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 10:35:45.526067 master-0 kubenswrapper[7271]: W0313 10:35:45.524828 7271 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524833 7271 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524841 7271 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524848 7271 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524853 7271 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524858 7271 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524863 7271 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524868 7271 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524875 7271 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524880 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524886 7271 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524892 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: W0313 10:35:45.524897 7271 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: I0313 10:35:45.525064 7271 flags.go:64] FLAG: --address="0.0.0.0"
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: I0313 10:35:45.525082 7271 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: I0313 10:35:45.525093 7271 flags.go:64] FLAG: --anonymous-auth="true"
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: I0313 10:35:45.525102 7271 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: I0313 10:35:45.525128 7271 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: I0313 10:35:45.525136 7271 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: I0313 10:35:45.525147 7271 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 13 10:35:45.526736 master-0 kubenswrapper[7271]: I0313 10:35:45.525155 7271 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525164 7271 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525170 7271 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525180 7271 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525187 7271 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525194 7271 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525201 7271 flags.go:64] FLAG: --cgroup-root=""
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525207 7271 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525213 7271 flags.go:64] FLAG: --client-ca-file=""
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525220 7271 flags.go:64] FLAG: --cloud-config=""
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525226 7271 flags.go:64] FLAG: --cloud-provider=""
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525232 7271 flags.go:64] FLAG: --cluster-dns="[]"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525244 7271 flags.go:64] FLAG: --cluster-domain=""
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525262 7271 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525268 7271 flags.go:64] FLAG: --config-dir=""
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525274 7271 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525281 7271 flags.go:64] FLAG: --container-log-max-files="5"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525290 7271 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525297 7271 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525303 7271 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525309 7271 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525314 7271 flags.go:64] FLAG: --contention-profiling="false"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525320 7271 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525325 7271 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525332 7271 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 13 10:35:45.527482 master-0 kubenswrapper[7271]: I0313 10:35:45.525337 7271 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525345 7271 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525351 7271 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525356 7271 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525362 7271 flags.go:64] FLAG: --enable-load-reader="false"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525368 7271 flags.go:64] FLAG: --enable-server="true"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525373 7271 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525381 7271 flags.go:64] FLAG: --event-burst="100"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525388 7271 flags.go:64] FLAG: --event-qps="50"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525394 7271 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525400 7271 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525406 7271 flags.go:64] FLAG: --eviction-hard=""
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525417 7271 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525423 7271 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525431 7271 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525438 7271 flags.go:64] FLAG: --eviction-soft=""
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525443 7271 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525449 7271 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525455 7271 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525462 7271 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525467 7271 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525473 7271 flags.go:64] FLAG: --fail-swap-on="true"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525478 7271 flags.go:64] FLAG: --feature-gates=""
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525487 7271 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525495 7271 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 13 10:35:45.528395 master-0 kubenswrapper[7271]: I0313 10:35:45.525501 7271 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525507 7271 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525526 7271 flags.go:64] FLAG: --healthz-port="10248"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525532 7271 flags.go:64] FLAG: --help="false"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525538 7271 flags.go:64] FLAG: --hostname-override=""
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525544 7271 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525550 7271 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525556 7271 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525562 7271 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525568 7271 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525574 7271 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525579 7271 flags.go:64] FLAG: --image-service-endpoint=""
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525622 7271 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525629 7271 flags.go:64] FLAG: --kube-api-burst="100"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525635 7271 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525642 7271 flags.go:64] FLAG: --kube-api-qps="50"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525648 7271 flags.go:64] FLAG: --kube-reserved=""
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525654 7271 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525660 7271 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525668 7271 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525675 7271 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525681 7271 flags.go:64] FLAG: --lock-file=""
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525687 7271 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525693 7271 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525700 7271 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525719 7271 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 13 10:35:45.529217 master-0 kubenswrapper[7271]: I0313 10:35:45.525726 7271 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525732 7271 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525738 7271 flags.go:64] FLAG: --logging-format="text"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525744 7271 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525751 7271 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525757 7271 flags.go:64] FLAG: --manifest-url=""
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525763 7271 flags.go:64] FLAG: --manifest-url-header=""
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525772 7271 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525778 7271 flags.go:64] FLAG: --max-open-files="1000000"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525786 7271 flags.go:64] FLAG: --max-pods="110"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525793 7271 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525799 7271 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525805 7271 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525811 7271 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525817 7271 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525823 7271 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525831 7271 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525849 7271 flags.go:64] FLAG: --node-status-max-images="50"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525854 7271 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525861 7271 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525867 7271 flags.go:64] FLAG: --pod-cidr=""
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525874 7271 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525886 7271 flags.go:64] FLAG: --pod-manifest-path=""
Mar 13 10:35:45.530208 master-0 kubenswrapper[7271]: I0313 10:35:45.525893 7271 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525899 7271 flags.go:64] FLAG: --pods-per-core="0"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525907 7271 flags.go:64] FLAG: --port="10250"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525913 7271 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525919 7271 flags.go:64] FLAG: --provider-id=""
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525924 7271 flags.go:64] FLAG: --qos-reserved=""
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525931 7271 flags.go:64] FLAG: --read-only-port="10255"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525938 7271 flags.go:64] FLAG: --register-node="true"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525944 7271 flags.go:64] FLAG: --register-schedulable="true"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525950 7271 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525963 7271 flags.go:64] FLAG: --registry-burst="10"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525969 7271 flags.go:64] FLAG: --registry-qps="5"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525977 7271 flags.go:64] FLAG: --reserved-cpus=""
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525983 7271 flags.go:64] FLAG: --reserved-memory=""
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525992 7271 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.525999 7271 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526005 7271 flags.go:64] FLAG: --rotate-certificates="false"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526011 7271 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526017 7271 flags.go:64] FLAG: --runonce="false"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526023 7271 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526029 7271 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526041 7271 flags.go:64] FLAG: --seccomp-default="false"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526048 7271 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526054 7271 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526060 7271 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526066 7271 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 13 10:35:45.530876 master-0 kubenswrapper[7271]: I0313 10:35:45.526072 7271 flags.go:64] FLAG: --storage-driver-password="root"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526079 7271 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526085 7271 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526091 7271 flags.go:64] FLAG: --storage-driver-user="root"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526097 7271 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526104 7271 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526110 7271 flags.go:64] FLAG: --system-cgroups=""
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526116 7271 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526128 7271 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526134 7271 flags.go:64] FLAG: --tls-cert-file=""
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526140 7271 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526150 7271 flags.go:64] FLAG: --tls-min-version=""
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526156 7271 flags.go:64] FLAG: --tls-private-key-file=""
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526163 7271 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526169 7271 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526175 7271 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526182 7271 flags.go:64] FLAG: --v="2"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526191 7271 flags.go:64] FLAG: --version="false"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526200 7271 flags.go:64] FLAG: --vmodule=""
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526256 7271 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: I0313 10:35:45.526265 7271 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: W0313 10:35:45.526451 7271 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: W0313 10:35:45.526462 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: W0313 10:35:45.526479 7271 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 10:35:45.531628 master-0 kubenswrapper[7271]: W0313 10:35:45.526485 7271 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526491 7271 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526497 7271 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526507 7271 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526513 7271 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526518 7271 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526523 7271 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526528 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526536 7271 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526542 7271 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526548 7271 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526554 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526566 7271 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526572 7271 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526577 7271 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526597 7271 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526612 7271 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526618 7271 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 10:35:45.532183
master-0 kubenswrapper[7271]: W0313 10:35:45.526623 7271 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 10:35:45.532183 master-0 kubenswrapper[7271]: W0313 10:35:45.526629 7271 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526636 7271 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526641 7271 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526646 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526651 7271 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526656 7271 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526661 7271 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526667 7271 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526674 7271 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526679 7271 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526685 7271 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526691 7271 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526698 7271 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526703 7271 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526708 7271 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526715 7271 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526723 7271 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526728 7271 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526733 7271 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 10:35:45.533091 master-0 kubenswrapper[7271]: W0313 10:35:45.526738 7271 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526742 7271 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526748 7271 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526752 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526757 7271 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 
10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526761 7271 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526768 7271 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526805 7271 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526812 7271 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526817 7271 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526822 7271 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526827 7271 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526832 7271 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526837 7271 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526842 7271 feature_gate.go:330] unrecognized feature gate: Example Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526846 7271 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526852 7271 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526856 7271 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526861 7271 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526866 7271 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 10:35:45.533547 master-0 kubenswrapper[7271]: W0313 10:35:45.526871 7271 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526875 7271 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526880 7271 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526884 7271 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526889 7271 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526894 7271 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526899 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526904 7271 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526908 7271 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526917 7271 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: W0313 10:35:45.526922 7271 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 10:35:45.534279 master-0 kubenswrapper[7271]: I0313 10:35:45.526939 7271 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 10:35:45.535518 master-0 kubenswrapper[7271]: I0313 10:35:45.535414 7271 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 13 10:35:45.535518 master-0 kubenswrapper[7271]: I0313 10:35:45.535475 7271 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535556 7271 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535565 7271 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535570 7271 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535576 7271 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535581 7271 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535598 7271 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535604 7271 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535613 7271 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535618 7271 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535623 7271 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535628 7271 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535632 7271 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535638 7271 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535642 7271 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535647 7271 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535652 7271 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535657 7271 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535661 7271 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535666 7271 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 10:35:45.535880 master-0 kubenswrapper[7271]: W0313 10:35:45.535670 7271 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535675 7271 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAWS Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535683 7271 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535689 7271 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535696 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535701 7271 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535706 7271 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535711 7271 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535715 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535718 7271 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535722 7271 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535726 7271 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535730 7271 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535734 7271 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535738 7271 
feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535744 7271 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535748 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535752 7271 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535756 7271 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:35:45.536530 master-0 kubenswrapper[7271]: W0313 10:35:45.535760 7271 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535764 7271 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535768 7271 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535772 7271 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535775 7271 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535779 7271 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535784 7271 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535822 7271 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535826 7271 feature_gate.go:330] 
unrecognized feature gate: OVNObservability Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535831 7271 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535836 7271 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535842 7271 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535847 7271 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535851 7271 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535856 7271 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535862 7271 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535867 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535872 7271 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535876 7271 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535881 7271 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 10:35:45.537459 master-0 kubenswrapper[7271]: W0313 10:35:45.535886 7271 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535890 7271 feature_gate.go:330] unrecognized feature gate: 
DNSNameResolver Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535895 7271 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535900 7271 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535904 7271 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535908 7271 feature_gate.go:330] unrecognized feature gate: Example Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535912 7271 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535916 7271 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535920 7271 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535924 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535928 7271 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535931 7271 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535936 7271 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.535940 7271 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: I0313 10:35:45.535947 7271 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 10:35:45.538222 master-0 kubenswrapper[7271]: W0313 10:35:45.536094 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536105 7271 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536110 7271 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536114 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536118 7271 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536123 7271 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536127 7271 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536132 7271 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536138 7271 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536143 7271 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536148 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536152 7271 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536156 7271 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536161 7271 feature_gate.go:330] unrecognized feature gate: Example Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536165 7271 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536170 7271 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536174 7271 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536178 7271 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536182 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 10:35:45.538751 master-0 kubenswrapper[7271]: W0313 10:35:45.536185 7271 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536231 7271 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 
10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536237 7271 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536242 7271 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536247 7271 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536251 7271 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536255 7271 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536259 7271 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536263 7271 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536266 7271 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536270 7271 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536274 7271 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536279 7271 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536284 7271 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536289 7271 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536292 7271 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536296 7271 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536299 7271 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536303 7271 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536307 7271 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 10:35:45.539393 master-0 kubenswrapper[7271]: W0313 10:35:45.536311 7271 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536315 7271 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536319 7271 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536323 7271 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536327 7271 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536331 7271 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536335 7271 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536339 7271 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536343 7271 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536347 7271 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536351 7271 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536354 7271 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536361 7271 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536365 7271 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536370 7271 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536374 7271 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536378 7271 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536382 7271 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536386 7271 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536390 7271 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 10:35:45.540063 master-0 kubenswrapper[7271]: W0313 10:35:45.536394 7271 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536398 7271 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536403 7271 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536408 7271 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536412 7271 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536416 7271 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536420 7271 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536425 7271 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536429 7271 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536433 7271 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536438 7271 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536442 7271 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: W0313 10:35:45.536446 7271 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: I0313 10:35:45.536453 7271 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 10:35:45.540772 master-0 kubenswrapper[7271]: I0313 10:35:45.536820 7271 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 10:35:45.541232 master-0 kubenswrapper[7271]: I0313 10:35:45.539032 7271 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 13 10:35:45.541232 master-0 kubenswrapper[7271]: I0313 10:35:45.539231 7271 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 13 10:35:45.541232 master-0 kubenswrapper[7271]: I0313 10:35:45.539552 7271 server.go:997] "Starting client certificate rotation"
Mar 13 10:35:45.541232 master-0 kubenswrapper[7271]: I0313 10:35:45.539629 7271 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 10:35:45.541232 master-0 kubenswrapper[7271]: I0313 10:35:45.539806 7271 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 10:24:19 +0000 UTC, rotation deadline is 2026-03-14 03:18:25.700256345 +0000 UTC
Mar 13 10:35:45.541232 master-0 kubenswrapper[7271]: I0313 10:35:45.539997 7271 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 16h42m40.160263031s for next certificate rotation
Mar 13 10:35:45.541232 master-0 kubenswrapper[7271]: I0313 10:35:45.540897 7271 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 10:35:45.542521 master-0 kubenswrapper[7271]: I0313 10:35:45.542443 7271 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 10:35:45.546043 master-0 kubenswrapper[7271]: I0313 10:35:45.545998 7271 log.go:25] "Validated CRI v1 runtime API"
Mar 13 10:35:45.549702 master-0 kubenswrapper[7271]: I0313 10:35:45.549353 7271 log.go:25] "Validated CRI v1 image API"
Mar 13 10:35:45.551117 master-0 kubenswrapper[7271]: I0313 10:35:45.551071 7271 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 10:35:45.568618 master-0 kubenswrapper[7271]: I0313 10:35:45.555080 7271 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 b89da96d-e8b7-46f7-a5b4-754b0b40734d:/dev/vda3]
Mar 13 10:35:45.568888 master-0 kubenswrapper[7271]: I0313 10:35:45.555129 7271 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/04e2d5b4e65ad4d6e19280743e00933d366bcbdfdc3c5d7c64aba41673f1a662/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/04e2d5b4e65ad4d6e19280743e00933d366bcbdfdc3c5d7c64aba41673f1a662/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/05e1407c7b27a4b6e8d757f9a77812ff8adcb8afeba6392964446e6020251829/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/05e1407c7b27a4b6e8d757f9a77812ff8adcb8afeba6392964446e6020251829/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/186ea687b2b873b969d378ad858dd467c244f19248903ea4dfa2320cfbb636aa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/186ea687b2b873b969d378ad858dd467c244f19248903ea4dfa2320cfbb636aa/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e94f5c752f1fded64a3ee340fb34a998ddce5e3acb0a9a9e83f157fbccc7394/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e94f5c752f1fded64a3ee340fb34a998ddce5e3acb0a9a9e83f157fbccc7394/userdata/shm major:0 minor:153 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1/userdata/shm major:0 
minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3a800ac5d1779f65790cfc04fd054cd45e77032d228f479c2dc831649fa5ed50/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3a800ac5d1779f65790cfc04fd054cd45e77032d228f479c2dc831649fa5ed50/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3ccf2b15838415a4be52f403df345301db18d66d37f6fa09df717882bb3b0fda/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3ccf2b15838415a4be52f403df345301db18d66d37f6fa09df717882bb3b0fda/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/40bc8729edbc545950cfd4248f291a2938cf20232c66e767905dda5ad583859c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/40bc8729edbc545950cfd4248f291a2938cf20232c66e767905dda5ad583859c/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6ae68534d60ba95b9a0cc4c1bb4a76e1716cb7e67493f9e7d66360f5bc7a13b3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6ae68534d60ba95b9a0cc4c1bb4a76e1716cb7e67493f9e7d66360f5bc7a13b3/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/72d184f62fa595f3a7463191ce616e1db275cdc732a1ab006b74065651d152d4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/72d184f62fa595f3a7463191ce616e1db275cdc732a1ab006b74065651d152d4/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f00c30651131dc152cafde2dc4f58ee3081e7ee0af524ba7783523529e49fba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f00c30651131dc152cafde2dc4f58ee3081e7ee0af524ba7783523529e49fba/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9681f2e75cd38c3ac67ed3e69a8ec48ca8451d34a1c4febdd60d09ed10b5be76/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9681f2e75cd38c3ac67ed3e69a8ec48ca8451d34a1c4febdd60d09ed10b5be76/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/97a747ef867987de8a139981f17a1b239fcb5c28199b67ab78094a7f8154dc7c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/97a747ef867987de8a139981f17a1b239fcb5c28199b67ab78094a7f8154dc7c/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3/userdata/shm major:0 minor:303 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bdc4eedb705036ff81733c276b076c49a4edd20b45c63ea797578c8d980a671b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bdc4eedb705036ff81733c276b076c49a4edd20b45c63ea797578c8d980a671b/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c3ef257d3865e4ef11b927a21a93e51aafc4c9ebd98baa7d651806b2a01e30df/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c3ef257d3865e4ef11b927a21a93e51aafc4c9ebd98baa7d651806b2a01e30df/userdata/shm major:0 minor:236 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cfcbc3062b54d8acdba8fb18315546ff9d2740da776054d7f9430b71a5238353/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfcbc3062b54d8acdba8fb18315546ff9d2740da776054d7f9430b71a5238353/userdata/shm major:0 minor:251 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d32de03a3a8ddd97f3e65197fb212f2ef727ae3a334417e63b4520866c016ec6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d32de03a3a8ddd97f3e65197fb212f2ef727ae3a334417e63b4520866c016ec6/userdata/shm major:0 minor:280 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6da84d85f2972436dd4f3787492391fdf9d5e5e6bdc8d3e5f13761666cdfd3b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6da84d85f2972436dd4f3787492391fdf9d5e5e6bdc8d3e5f13761666cdfd3b/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ecef9696f6ed61b901e54b92a5f3382e4d7c9cf19d60275d449ceb9924469019/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ecef9696f6ed61b901e54b92a5f3382e4d7c9cf19d60275d449ceb9924469019/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~projected/kube-api-access-qqjkf:{mountpoint:/var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~projected/kube-api-access-qqjkf major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~projected/kube-api-access-cg69z:{mountpoint:/var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~projected/kube-api-access-cg69z major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~projected/kube-api-access-8rfpp:{mountpoint:/var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~projected/kube-api-access-8rfpp major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~projected/kube-api-access-lpdlr:{mountpoint:/var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~projected/kube-api-access-lpdlr major:0 minor:221 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2afe3890-e844-4dd3-ba49-3ac9178549bf/volumes/kubernetes.io~projected/kube-api-access-d84xk:{mountpoint:/var/lib/kubelet/pods/2afe3890-e844-4dd3-ba49-3ac9178549bf/volumes/kubernetes.io~projected/kube-api-access-d84xk major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~projected/kube-api-access-hp2qn:{mountpoint:/var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~projected/kube-api-access-hp2qn major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ff2ab1c-7057-4e18-8e32-68807f86532a/volumes/kubernetes.io~projected/kube-api-access-8c4rc:{mountpoint:/var/lib/kubelet/pods/3ff2ab1c-7057-4e18-8e32-68807f86532a/volumes/kubernetes.io~projected/kube-api-access-8c4rc major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~projected/kube-api-access-kjcjm:{mountpoint:/var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~projected/kube-api-access-kjcjm major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4aaf36b4-e556-4723-a624-aa2edc69c301/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4aaf36b4-e556-4723-a624-aa2edc69c301/volumes/kubernetes.io~projected/kube-api-access major:0 minor:98 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4d5479f3-51ec-4b93-8188-21cdda44828d/volumes/kubernetes.io~projected/kube-api-access-j6xlb:{mountpoint:/var/lib/kubelet/pods/4d5479f3-51ec-4b93-8188-21cdda44828d/volumes/kubernetes.io~projected/kube-api-access-j6xlb major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~projected/kube-api-access-grplv:{mountpoint:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~projected/kube-api-access-grplv major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/etcd-client major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5843b0d4-a538-4261-b425-598e318c9d07/volumes/kubernetes.io~projected/kube-api-access-r6nnz:{mountpoint:/var/lib/kubelet/pods/5843b0d4-a538-4261-b425-598e318c9d07/volumes/kubernetes.io~projected/kube-api-access-r6nnz major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~projected/kube-api-access-22bwx:{mountpoint:/var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~projected/kube-api-access-22bwx major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~secret/serving-cert major:0 minor:302 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/66f49a19-0e3b-4611-b8a6-5f5687fa20b6/volumes/kubernetes.io~projected/kube-api-access-knkb7:{mountpoint:/var/lib/kubelet/pods/66f49a19-0e3b-4611-b8a6-5f5687fa20b6/volumes/kubernetes.io~projected/kube-api-access-knkb7 major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~projected/kube-api-access-rjk5l:{mountpoint:/var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~projected/kube-api-access-rjk5l major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/kube-api-access-qd2mn:{mountpoint:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/kube-api-access-qd2mn major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/79bb87a4-8834-4c73-834e-356ccc1f7f9b/volumes/kubernetes.io~projected/kube-api-access-56qz6:{mountpoint:/var/lib/kubelet/pods/79bb87a4-8834-4c73-834e-356ccc1f7f9b/volumes/kubernetes.io~projected/kube-api-access-56qz6 major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/866cf034-8fd8-4f16-8e9b-68627228aa8d/volumes/kubernetes.io~projected/kube-api-access-mnrlx:{mountpoint:/var/lib/kubelet/pods/866cf034-8fd8-4f16-8e9b-68627228aa8d/volumes/kubernetes.io~projected/kube-api-access-mnrlx major:0 minor:226 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~projected/kube-api-access major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a305f45-8689-45a8-8c8b-5954f2c863df/volumes/kubernetes.io~projected/kube-api-access-zp6pp:{mountpoint:/var/lib/kubelet/pods/8a305f45-8689-45a8-8c8b-5954f2c863df/volumes/kubernetes.io~projected/kube-api-access-zp6pp major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/kube-api-access-m5vcv:{mountpoint:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/kube-api-access-m5vcv major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~projected/kube-api-access major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/95339220-324d-45e7-bdc2-e4f42fbd1d32/volumes/kubernetes.io~projected/kube-api-access-j59zw:{mountpoint:/var/lib/kubelet/pods/95339220-324d-45e7-bdc2-e4f42fbd1d32/volumes/kubernetes.io~projected/kube-api-access-j59zw major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9aa4b44d-f202-4670-afab-44b38960026f/volumes/kubernetes.io~projected/kube-api-access-bjvtr:{mountpoint:/var/lib/kubelet/pods/9aa4b44d-f202-4670-afab-44b38960026f/volumes/kubernetes.io~projected/kube-api-access-bjvtr major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~projected/kube-api-access-p29zg:{mountpoint:/var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~projected/kube-api-access-p29zg major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b12e76f4-b960-4534-90e6-a2cdbecd1728/volumes/kubernetes.io~projected/kube-api-access-xq9dl:{mountpoint:/var/lib/kubelet/pods/b12e76f4-b960-4534-90e6-a2cdbecd1728/volumes/kubernetes.io~projected/kube-api-access-xq9dl major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~projected/kube-api-access-l5rht:{mountpoint:/var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~projected/kube-api-access-l5rht major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~projected/kube-api-access-vxvqn:{mountpoint:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~projected/kube-api-access-vxvqn major:0 minor:152 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c455a959-d764-4b4f-a1e0-95c27495dd9d/volumes/kubernetes.io~projected/kube-api-access-2cpdn:{mountpoint:/var/lib/kubelet/pods/c455a959-d764-4b4f-a1e0-95c27495dd9d/volumes/kubernetes.io~projected/kube-api-access-2cpdn major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~projected/kube-api-access major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~projected/kube-api-access-zk4sg:{mountpoint:/var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~projected/kube-api-access-zk4sg major:0 minor:138 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~secret/webhook-cert major:0 minor:139 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/15cd2fd63cd156ecd3094d86f3104f9db079b84990e8d39b136313a4d00d0169/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/efef037baa3b2aca6c932fef29e87cd081e7d9f8a666c289364e342bc82e16ec/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/788a52383ea7cd6022311464850915f5d2d0a4868e1dc00beb1637ef79c43539/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/7e17db66c457e9895aad61f332432c5dd0af0963e1891fdf23b1ebd26585927d/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/bf554e2439fdf3ee56c8b4adcaa85968e90fb5ab701ccf41700c1426f7ee48e4/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/2add642a88e02f020308d0045430d6741693a305930c653bb02a1e27756bad73/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/d63b1aece93d203873e89d5604853a3e61c406a5344c7f8fa20fe29431471213/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/e0c6b822443411b9107d8404b82b86aa89cd61c9eb48b79c1259f732b7497dbe/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/c597e3becccf12c3a78ddd6ac9e75b1cb3e18ba7aa8ea84e9874ce88c7b9213e/merged major:0 minor:147 fsType:overlay blockSize:0} 
overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/dd5c89f91709e826379cd3844b10b017e3d12672fcd7da5d4349ca8f966a6392/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/0aa5d6a4e568b33a699c8f27499bf423d0b566e5a9c65c3dc51c7f7592e527eb/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/653d60a1f9e63fb6e065cbe1e068ab427e45faa093396e79bd846fcea7616e72/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/f07e477f7dc5fda552916b3b44a46ef62ae939ae84da9ccb9110471c7fb00ab0/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/0b1820f56e64ffb3a423207a4565934a47c34026d403fe9a978d2f0aae5d4829/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/03181025380e8c62e68a6a35467a5e8745d080a0fd540dc9ee4c567b880f3e47/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/2343ab0b0893c439477f6dcf82a8558fba897982d47e761fa8babc5b8f143d09/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/ec5561b70148038f2901ea58f74d16b1387c98a158b3054bf2dc5e015de0d9de/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/542b5cd9c5e95a84d4f365276792f329ed9c6a96b0a986f2ce5721a8bece6a24/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/feba0ff4f86be6d6979ed0b9558a7a29794b691328b803d19740952471d032b6/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/39b09c8e6c795ba871df501b357aead982bd05239f7c88cf5813b8e994f6fde2/merged major:0 minor:199 fsType:overlay blockSize:0} 
overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/41b3f03062ce5561443eb75a29b40ff983ed70bd2a397b5dbb415673ab41e9af/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-262:{mountpoint:/var/lib/containers/storage/overlay/d4367d22cb29a8e989bed91234ebff4bf1225a4d1b58fe54d4496d26bc721431/merged major:0 minor:262 fsType:overlay blockSize:0} overlay_0-266:{mountpoint:/var/lib/containers/storage/overlay/abe5a50d69e7f250d0dfe249c5fc909db06c3a3d4d79d2860360152aac251baf/merged major:0 minor:266 fsType:overlay blockSize:0} overlay_0-268:{mountpoint:/var/lib/containers/storage/overlay/73ece89ffb90c91cbe6ba0c4c048014b2766e9c3d3db5a9f447bea2d45328894/merged major:0 minor:268 fsType:overlay blockSize:0} overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/91dd884f93ef7a68593d246f1901220282b9123b6007bca6b2dd054f9073eebd/merged major:0 minor:272 fsType:overlay blockSize:0} overlay_0-278:{mountpoint:/var/lib/containers/storage/overlay/1f133c8552c58fd89f515ca78e658ac759204ca1ffb475f78b354dee4a93905f/merged major:0 minor:278 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/f2151ec939cb5904ee34f88ca0e354dde34b24135df8afdce54d4ed0e52240be/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-284:{mountpoint:/var/lib/containers/storage/overlay/0d7baa00cc913c1f49d5ebc952cf16a26f23281ca3827bb7df6db40240a9c467/merged major:0 minor:284 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/47c5f226446daf16de682350249a9878093b699df65e600ecd2fa09d852645a6/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/013f5d067642c94b9197094c393116f7dabc8c09c3aa204876818087a4f3f848/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/8b7c4bbddcedff3b1be56be44f17102a2570ba3a2f224f0cbdf2ce438a978f7a/merged major:0 minor:293 fsType:overlay blockSize:0} 
overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/0279774a67033301b7f7ab66078b8f3be5116970d17b45e04f5a026715a32319/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/03e0fabd7cf8e6e72f9fab026c15b3f4f0275252a1b37c40f934150384c6e5f5/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/4c5866b18fd27945da0f8a53206b74156a6610997a9124b391ac7fa6543f64d4/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/02a9a54a92d7dc09a1901c853cc39f2a6ff51369cf1e2a19036a765139eaf92b/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/96c00de96bd355ab7fe7b4c24f4080005a538a524a5660c8db6ebe00d9247485/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/ca6fedea0a3aa40646a35b57e6967b07a116ad03bcc63da95120626fa38cc2f0/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/36513d08985004e7b2c22d61ecc1fdd8da1d50ed500631d3de72f7229bd98544/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/5ba87ff8b6b2ab76da5b4028183a91ff3f0e086de1ea900014a353bfe7c0f61b/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/e975e919bcdcb71937595a2ac78d5c03bfee403988ed70af1d20e074eca1dacc/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/f755c1479b8fbd6cafa4c4e0744b1a1cab9edadf708d834a83790260ed5dc0f6/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/802541b6038cf17c746d0ea18d1245615d95d9a3ebd5fb46ce6822db9179f678/merged major:0 minor:64 fsType:overlay blockSize:0} 
overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/e02ae6db91cfea381152f23976e818553ebddebd1cc8a9248f398931371cada6/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/48c44569ec0860761a80164a33c5edd5b9cf2bb1e6ce0b845b07e126e5d1d713/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/f9208c4e90a4eefa2f38f82d7a573420f1aeff17dbb24e8032e04cf06a21c584/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/b001dba31c425b9951e8278ef5d098c4340872e9965b14d720e363876a2aa640/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/7da1fc79c150ed287334410bd6706a0165e2f5b9c752c6e662b7809d80340c05/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/var/lib/containers/storage/overlay/8fd279912d859f87ac10330ab218bb3fc9659948e40155947093baa590331ef7/merged major:0 minor:91 fsType:overlay blockSize:0}] Mar 13 10:35:45.578015 master-0 kubenswrapper[7271]: I0313 10:35:45.577369 7271 manager.go:217] Machine: {Timestamp:2026-03-13 10:35:45.576296818 +0000 UTC m=+0.103119228 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:0b3c13f41020471d8d074d77a948365d SystemUUID:0b3c13f4-1020-471d-8d07-4d77a948365d BootID:8a9973c8-4daa-47e3-857d-01825c17d4bc Filesystems:[{Device:/run/containers/storage/overlay-containers/f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b12e76f4-b960-4534-90e6-a2cdbecd1728/volumes/kubernetes.io~projected/kube-api-access-xq9dl DeviceMajor:0 DeviceMinor:261 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f00c30651131dc152cafde2dc4f58ee3081e7ee0af524ba7783523529e49fba/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~projected/kube-api-access-lpdlr DeviceMajor:0 DeviceMinor:221 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c3ef257d3865e4ef11b927a21a93e51aafc4c9ebd98baa7d651806b2a01e30df/userdata/shm DeviceMajor:0 DeviceMinor:236 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:246 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bdc4eedb705036ff81733c276b076c49a4edd20b45c63ea797578c8d980a671b/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3ccf2b15838415a4be52f403df345301db18d66d37f6fa09df717882bb3b0fda/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/66f49a19-0e3b-4611-b8a6-5f5687fa20b6/volumes/kubernetes.io~projected/kube-api-access-knkb7 DeviceMajor:0 DeviceMinor:243 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:245 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-268 DeviceMajor:0 DeviceMinor:268 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 
DeviceMinor:220 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/kube-api-access-m5vcv DeviceMajor:0 DeviceMinor:230 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/kube-api-access-qd2mn DeviceMajor:0 DeviceMinor:258 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~projected/kube-api-access-hp2qn DeviceMajor:0 DeviceMinor:248 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3a800ac5d1779f65790cfc04fd054cd45e77032d228f479c2dc831649fa5ed50/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~projected/kube-api-access-8rfpp DeviceMajor:0 DeviceMinor:99 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~projected/kube-api-access-grplv DeviceMajor:0 DeviceMinor:222 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/95339220-324d-45e7-bdc2-e4f42fbd1d32/volumes/kubernetes.io~projected/kube-api-access-j59zw DeviceMajor:0 DeviceMinor:250 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:235 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/40bc8729edbc545950cfd4248f291a2938cf20232c66e767905dda5ad583859c/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/79bb87a4-8834-4c73-834e-356ccc1f7f9b/volumes/kubernetes.io~projected/kube-api-access-56qz6 DeviceMajor:0 DeviceMinor:123 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:227 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6ae68534d60ba95b9a0cc4c1bb4a76e1716cb7e67493f9e7d66360f5bc7a13b3/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2afe3890-e844-4dd3-ba49-3ac9178549bf/volumes/kubernetes.io~projected/kube-api-access-d84xk DeviceMajor:0 DeviceMinor:255 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/186ea687b2b873b969d378ad858dd467c244f19248903ea4dfa2320cfbb636aa/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/5843b0d4-a538-4261-b425-598e318c9d07/volumes/kubernetes.io~projected/kube-api-access-r6nnz DeviceMajor:0 DeviceMinor:118 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6da84d85f2972436dd4f3787492391fdf9d5e5e6bdc8d3e5f13761666cdfd3b/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4d5479f3-51ec-4b93-8188-21cdda44828d/volumes/kubernetes.io~projected/kube-api-access-j6xlb DeviceMajor:0 DeviceMinor:224 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-262 DeviceMajor:0 DeviceMinor:262 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3ff2ab1c-7057-4e18-8e32-68807f86532a/volumes/kubernetes.io~projected/kube-api-access-8c4rc DeviceMajor:0 DeviceMinor:231 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~projected/kube-api-access-vxvqn DeviceMajor:0 DeviceMinor:152 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a305f45-8689-45a8-8c8b-5954f2c863df/volumes/kubernetes.io~projected/kube-api-access-zp6pp DeviceMajor:0 DeviceMinor:223 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:209 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~projected/kube-api-access-qqjkf DeviceMajor:0 DeviceMinor:225 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c455a959-d764-4b4f-a1e0-95c27495dd9d/volumes/kubernetes.io~projected/kube-api-access-2cpdn DeviceMajor:0 DeviceMinor:229 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:129 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ecef9696f6ed61b901e54b92a5f3382e4d7c9cf19d60275d449ceb9924469019/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/866cf034-8fd8-4f16-8e9b-68627228aa8d/volumes/kubernetes.io~projected/kube-api-access-mnrlx DeviceMajor:0 DeviceMinor:226 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/97a747ef867987de8a139981f17a1b239fcb5c28199b67ab78094a7f8154dc7c/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/72d184f62fa595f3a7463191ce616e1db275cdc732a1ab006b74065651d152d4/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:139 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:302 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~projected/kube-api-access-zk4sg DeviceMajor:0 DeviceMinor:138 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e94f5c752f1fded64a3ee340fb34a998ddce5e3acb0a9a9e83f157fbccc7394/userdata/shm DeviceMajor:0 DeviceMinor:153 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfcbc3062b54d8acdba8fb18315546ff9d2740da776054d7f9430b71a5238353/userdata/shm DeviceMajor:0 DeviceMinor:251 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-266 DeviceMajor:0 DeviceMinor:266 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9aa4b44d-f202-4670-afab-44b38960026f/volumes/kubernetes.io~projected/kube-api-access-bjvtr DeviceMajor:0 DeviceMinor:105 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~projected/kube-api-access-rjk5l DeviceMajor:0 DeviceMinor:233 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3/userdata/shm DeviceMajor:0 DeviceMinor:303 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-284 DeviceMajor:0 
DeviceMinor:284 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~projected/kube-api-access-l5rht DeviceMajor:0 DeviceMinor:232 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/05e1407c7b27a4b6e8d757f9a77812ff8adcb8afeba6392964446e6020251829/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/04e2d5b4e65ad4d6e19280743e00933d366bcbdfdc3c5d7c64aba41673f1a662/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9681f2e75cd38c3ac67ed3e69a8ec48ca8451d34a1c4febdd60d09ed10b5be76/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~projected/kube-api-access-cg69z DeviceMajor:0 DeviceMinor:125 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-278 DeviceMajor:0 DeviceMinor:278 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d32de03a3a8ddd97f3e65197fb212f2ef727ae3a334417e63b4520866c016ec6/userdata/shm DeviceMajor:0 DeviceMinor:280 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4aaf36b4-e556-4723-a624-aa2edc69c301/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:98 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~projected/kube-api-access-kjcjm DeviceMajor:0 DeviceMinor:228 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~projected/kube-api-access-22bwx DeviceMajor:0 DeviceMinor:234 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~projected/kube-api-access-p29zg DeviceMajor:0 DeviceMinor:244 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 
252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:04e2d5b4e65ad4d MacAddress:e6:63:9f:17:e4:b8 Speed:10000 Mtu:8900} {Name:186ea687b2b873b MacAddress:12:81:ce:84:f2:49 Speed:10000 Mtu:8900} {Name:40bc8729edbc545 MacAddress:2e:04:5b:fa:6e:90 Speed:10000 Mtu:8900} {Name:6ae68534d60ba95 MacAddress:b2:97:dc:e0:22:11 Speed:10000 Mtu:8900} {Name:8f00c30651131dc MacAddress:c6:73:54:b5:7d:a1 Speed:10000 Mtu:8900} {Name:8f43bd68b145b0d MacAddress:fa:17:4c:92:fc:ab Speed:10000 Mtu:8900} {Name:981440e84066752 MacAddress:5e:ff:74:d1:1d:15 Speed:10000 Mtu:8900} {Name:a9a8fe1cfe7b05b MacAddress:fe:a8:9c:cd:e2:2a Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:a2:91:ee:2b:8e:24 Speed:0 Mtu:8900} {Name:c3ef257d3865e4e MacAddress:6e:65:5e:3f:c9:33 Speed:10000 Mtu:8900} {Name:cfcbc3062b54d8a MacAddress:1e:3e:a8:5a:5e:aa Speed:10000 Mtu:8900} {Name:d32de03a3a8ddd9 MacAddress:3a:40:4b:69:29:48 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:e1:20:b5 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:f6:43:7d Speed:-1 Mtu:9000} {Name:f2d2633be257a3a MacAddress:e6:2d:a9:a6:35:3c Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:fe:76:e9:ad:1f:61 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data 
Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 
Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 10:35:45.578488 master-0 kubenswrapper[7271]: I0313 10:35:45.578472 7271 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 13 10:35:45.578891 master-0 kubenswrapper[7271]: I0313 10:35:45.578870 7271 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 10:35:45.580164 master-0 kubenswrapper[7271]: I0313 10:35:45.580106 7271 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 10:35:45.580663 master-0 kubenswrapper[7271]: I0313 10:35:45.580542 7271 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 10:35:45.581310 master-0 kubenswrapper[7271]: I0313 10:35:45.580651 7271 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 10:35:45.581388 master-0 kubenswrapper[7271]: I0313 10:35:45.581327 7271 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 10:35:45.581388 master-0 kubenswrapper[7271]: I0313 10:35:45.581359 7271 container_manager_linux.go:303] "Creating device plugin manager"
Mar 13 10:35:45.581388 master-0 kubenswrapper[7271]: I0313 10:35:45.581370 7271 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 13 10:35:45.581495 master-0 kubenswrapper[7271]: I0313 10:35:45.581396 7271 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 13 10:35:45.581710 master-0 kubenswrapper[7271]: I0313 10:35:45.581682 7271 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 10:35:45.581881 master-0 kubenswrapper[7271]: I0313 10:35:45.581857 7271 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 13 10:35:45.581973 master-0 kubenswrapper[7271]: I0313 10:35:45.581949 7271 kubelet.go:418] "Attempting to sync node with API server"
Mar 13 10:35:45.582020 master-0 kubenswrapper[7271]: I0313 10:35:45.581990 7271 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 10:35:45.582020 master-0 kubenswrapper[7271]: I0313 10:35:45.582007 7271 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 13 10:35:45.582020 master-0 kubenswrapper[7271]: I0313 10:35:45.582020 7271 kubelet.go:324] "Adding apiserver pod source"
Mar 13 10:35:45.582141 master-0 kubenswrapper[7271]: I0313 10:35:45.582034 7271 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 10:35:45.585104 master-0 kubenswrapper[7271]: I0313 10:35:45.585056 7271 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 13 10:35:45.585405 master-0 kubenswrapper[7271]: I0313 10:35:45.585371 7271 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 13 10:35:45.585986 master-0 kubenswrapper[7271]: I0313 10:35:45.585838 7271 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 13 10:35:45.586066 master-0 kubenswrapper[7271]: I0313 10:35:45.586039 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 13 10:35:45.586116 master-0 kubenswrapper[7271]: I0313 10:35:45.586070 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 13 10:35:45.586116 master-0 kubenswrapper[7271]: I0313 10:35:45.586081 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 13 10:35:45.586116 master-0 kubenswrapper[7271]: I0313 10:35:45.586089 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 13 10:35:45.586116 master-0 kubenswrapper[7271]: I0313 10:35:45.586105 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 13 10:35:45.586116 master-0 kubenswrapper[7271]: I0313 10:35:45.586115 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 13 10:35:45.586116 master-0 kubenswrapper[7271]: I0313 10:35:45.586125 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 13 10:35:45.586342 master-0 kubenswrapper[7271]: I0313 10:35:45.586136 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 13 10:35:45.586342 master-0 kubenswrapper[7271]: I0313 10:35:45.586149 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 13 10:35:45.586342 master-0 kubenswrapper[7271]: I0313 10:35:45.586159 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 13 10:35:45.586342 master-0 kubenswrapper[7271]: I0313 10:35:45.586193 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 13 10:35:45.586342 master-0 kubenswrapper[7271]: I0313 10:35:45.586208 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 13 10:35:45.586342 master-0 kubenswrapper[7271]: I0313 10:35:45.586243 7271 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 13 10:35:45.586789 master-0 kubenswrapper[7271]: I0313 10:35:45.586763 7271 server.go:1280] "Started kubelet"
Mar 13 10:35:45.587469 master-0 kubenswrapper[7271]: I0313 10:35:45.587425 7271 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 10:35:45.587683 master-0 kubenswrapper[7271]: I0313 10:35:45.587434 7271 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 10:35:45.587823 master-0 kubenswrapper[7271]: I0313 10:35:45.587794 7271 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 13 10:35:45.588083 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 13 10:35:45.588704 master-0 kubenswrapper[7271]: I0313 10:35:45.588302 7271 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 10:35:45.589375 master-0 kubenswrapper[7271]: I0313 10:35:45.589274 7271 server.go:449] "Adding debug handlers to kubelet server"
Mar 13 10:35:45.598128 master-0 kubenswrapper[7271]: I0313 10:35:45.598090 7271 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 13 10:35:45.598191 master-0 kubenswrapper[7271]: I0313 10:35:45.598142 7271 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 10:35:45.598525 master-0 kubenswrapper[7271]: I0313 10:35:45.598501 7271 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 13 10:35:45.598525 master-0 kubenswrapper[7271]: I0313 10:35:45.598518 7271 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 13 10:35:45.598656 master-0 kubenswrapper[7271]: I0313 10:35:45.598637 7271 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 13 10:35:45.598761 master-0 kubenswrapper[7271]: I0313 10:35:45.598711 7271 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 10:24:19 +0000 UTC, rotation deadline is 2026-03-14 05:05:39.099261312 +0000 UTC
Mar 13 10:35:45.598833 master-0 kubenswrapper[7271]: I0313 10:35:45.598819 7271 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h29m53.50044634s for next certificate rotation
Mar 13 10:35:45.598957 master-0 kubenswrapper[7271]: E0313 10:35:45.598935 7271 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 10:35:45.599612 master-0 kubenswrapper[7271]: I0313 10:35:45.599544 7271 factory.go:55] Registering systemd factory
Mar 13 10:35:45.599612 master-0 kubenswrapper[7271]: I0313 10:35:45.599611 7271 factory.go:221] Registration of the systemd container factory successfully
Mar 13 10:35:45.599940 master-0 kubenswrapper[7271]: I0313 10:35:45.599923 7271 factory.go:153] Registering CRI-O factory
Mar 13 10:35:45.600020 master-0 kubenswrapper[7271]: I0313 10:35:45.599995 7271 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 13 10:35:45.600108 master-0 kubenswrapper[7271]: I0313 10:35:45.599996 7271 factory.go:221] Registration of the crio container factory successfully
Mar 13 10:35:45.600231 master-0 kubenswrapper[7271]: I0313 10:35:45.600128 7271 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 13 10:35:45.601907 master-0 kubenswrapper[7271]: I0313 10:35:45.601858 7271 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 13 10:35:45.601991 master-0 kubenswrapper[7271]: I0313 10:35:45.601912 7271 factory.go:103] Registering Raw factory
Mar 13 10:35:45.603931 master-0 kubenswrapper[7271]: I0313 10:35:45.603889 7271 manager.go:1196] Started watching for new ooms in manager
Mar 13 10:35:45.604096 master-0 kubenswrapper[7271]: I0313 10:35:45.602234 7271 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 13 10:35:45.605125 master-0 kubenswrapper[7271]: I0313 10:35:45.605100 7271 manager.go:319] Starting recovery of all containers
Mar 13 10:35:45.605614 master-0 kubenswrapper[7271]: I0313 10:35:45.605508 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/projected/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-kube-api-access-qqjkf" seLinuxMountContext=""
Mar 13 10:35:45.605614 master-0 kubenswrapper[7271]: I0313 10:35:45.605606 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9db15a-8854-485b-9863-9cbe5dddd977" volumeName="kubernetes.io/configmap/8f9db15a-8854-485b-9863-9cbe5dddd977-config" seLinuxMountContext=""
Mar 13 10:35:45.605738 master-0 kubenswrapper[7271]: I0313 10:35:45.605627 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f87662b9-6ac6-44f3-8a16-ff858c2baa91" volumeName="kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert" seLinuxMountContext=""
Mar 13 10:35:45.605738 master-0 kubenswrapper[7271]: I0313 10:35:45.605649 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c12a5d5-711f-4663-974c-c4b06e15fc39" volumeName="kubernetes.io/projected/1c12a5d5-711f-4663-974c-c4b06e15fc39-kube-api-access-cg69z" seLinuxMountContext=""
Mar 13 10:35:45.605738 master-0 kubenswrapper[7271]: I0313 10:35:45.605665 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" volumeName="kubernetes.io/secret/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-metrics-tls" seLinuxMountContext=""
Mar 13 10:35:45.605738 master-0 kubenswrapper[7271]: I0313 10:35:45.605684 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2afe3890-e844-4dd3-ba49-3ac9178549bf" volumeName="kubernetes.io/projected/2afe3890-e844-4dd3-ba49-3ac9178549bf-kube-api-access-d84xk" seLinuxMountContext=""
Mar 13 10:35:45.605738 master-0 kubenswrapper[7271]: I0313 10:35:45.605700 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d5479f3-51ec-4b93-8188-21cdda44828d" volumeName="kubernetes.io/projected/4d5479f3-51ec-4b93-8188-21cdda44828d-kube-api-access-j6xlb" seLinuxMountContext=""
Mar 13 10:35:45.605738 master-0 kubenswrapper[7271]: I0313 10:35:45.605722 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa4b44d-f202-4670-afab-44b38960026f" volumeName="kubernetes.io/projected/9aa4b44d-f202-4670-afab-44b38960026f-kube-api-access-bjvtr" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605759 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/secret/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605774 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ff2ab1c-7057-4e18-8e32-68807f86532a" volumeName="kubernetes.io/projected/3ff2ab1c-7057-4e18-8e32-68807f86532a-kube-api-access-8c4rc" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605790 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4aaf36b4-e556-4723-a624-aa2edc69c301" volumeName="kubernetes.io/projected/4aaf36b4-e556-4723-a624-aa2edc69c301-kube-api-access" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605802 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5843b0d4-a538-4261-b425-598e318c9d07" volumeName="kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-whereabouts-configmap" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605822 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" volumeName="kubernetes.io/projected/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-kube-api-access-knkb7" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605835 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95339220-324d-45e7-bdc2-e4f42fbd1d32" volumeName="kubernetes.io/projected/95339220-324d-45e7-bdc2-e4f42fbd1d32-kube-api-access-j59zw" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605850 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec3168fc-6c8f-4603-94e0-17b1ae22a802" volumeName="kubernetes.io/projected/ec3168fc-6c8f-4603-94e0-17b1ae22a802-kube-api-access" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605862 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c12a5d5-711f-4663-974c-c4b06e15fc39" volumeName="kubernetes.io/secret/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605891 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" volumeName="kubernetes.io/projected/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-kube-api-access-8rfpp" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605903 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-service-ca" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605915 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605929 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5843b0d4-a538-4261-b425-598e318c9d07" volumeName="kubernetes.io/projected/5843b0d4-a538-4261-b425-598e318c9d07-kube-api-access-r6nnz" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605941 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="866cf034-8fd8-4f16-8e9b-68627228aa8d" volumeName="kubernetes.io/projected/866cf034-8fd8-4f16-8e9b-68627228aa8d-kube-api-access-mnrlx" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605954 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86ae8cb8-72b3-4be6-9feb-ee0c0da42dba" volumeName="kubernetes.io/projected/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-kube-api-access" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605969 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa4b44d-f202-4670-afab-44b38960026f" volumeName="kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-multus-daemon-config" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.605981 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a998af-4fc0-4078-a6a0-93dde6c00508" volumeName="kubernetes.io/secret/a1a998af-4fc0-4078-a6a0-93dde6c00508-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606006 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b12e76f4-b960-4534-90e6-a2cdbecd1728" volumeName="kubernetes.io/projected/b12e76f4-b960-4534-90e6-a2cdbecd1728-kube-api-access-xq9dl" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606018 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-script-lib" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606050 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec3168fc-6c8f-4603-94e0-17b1ae22a802" volumeName="kubernetes.io/configmap/ec3168fc-6c8f-4603-94e0-17b1ae22a802-config" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606068 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f87662b9-6ac6-44f3-8a16-ff858c2baa91" volumeName="kubernetes.io/projected/f87662b9-6ac6-44f3-8a16-ff858c2baa91-kube-api-access-zk4sg" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606082 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-trusted-ca-bundle" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606094 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-ca" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606107 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed47c57-533f-43e4-88eb-07da29b4878f" volumeName="kubernetes.io/empty-dir/6ed47c57-533f-43e4-88eb-07da29b4878f-available-featuregates" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606118 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86ae8cb8-72b3-4be6-9feb-ee0c0da42dba" volumeName="kubernetes.io/configmap/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-config" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606139 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c455a959-d764-4b4f-a1e0-95c27495dd9d" volumeName="kubernetes.io/projected/c455a959-d764-4b4f-a1e0-95c27495dd9d-kube-api-access-2cpdn" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606150 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f87662b9-6ac6-44f3-8a16-ff858c2baa91" volumeName="kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-ovnkube-identity-cm" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606165 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c12a5d5-711f-4663-974c-c4b06e15fc39" volumeName="kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovnkube-config" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606178 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" volumeName="kubernetes.io/configmap/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-config" seLinuxMountContext=""
Mar 13 10:35:45.606024 master-0 kubenswrapper[7271]: I0313 10:35:45.606191 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-client" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606203 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" volumeName="kubernetes.io/configmap/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-trusted-ca" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606218 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a305f45-8689-45a8-8c8b-5954f2c863df" volumeName="kubernetes.io/projected/8a305f45-8689-45a8-8c8b-5954f2c863df-kube-api-access-zp6pp" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606238 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cf9326b-bc23-45c2-82c4-9c08c739ac5a" volumeName="kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-bound-sa-token" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606261 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a998af-4fc0-4078-a6a0-93dde6c00508" volumeName="kubernetes.io/configmap/a1a998af-4fc0-4078-a6a0-93dde6c00508-config" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606285 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-config" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606297 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b2e803-302b-4650-b18f-d3d2dd703bd5" volumeName="kubernetes.io/configmap/37b2e803-302b-4650-b18f-d3d2dd703bd5-config" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606310 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4aaf36b4-e556-4723-a624-aa2edc69c301" volumeName="kubernetes.io/configmap/4aaf36b4-e556-4723-a624-aa2edc69c301-service-ca" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606321 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-env-overrides" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606339 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec3168fc-6c8f-4603-94e0-17b1ae22a802" volumeName="kubernetes.io/secret/ec3168fc-6c8f-4603-94e0-17b1ae22a802-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606381 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d5479f3-51ec-4b93-8188-21cdda44828d" volumeName="kubernetes.io/configmap/4d5479f3-51ec-4b93-8188-21cdda44828d-telemetry-config" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606391 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667717b-fb74-456b-8615-16475cb69e98" volumeName="kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-kube-api-access-qd2mn" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606411 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8d40b37-0f3d-4531-9fa8-eda965d2337d" volumeName="kubernetes.io/secret/b8d40b37-0f3d-4531-9fa8-eda965d2337d-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606420 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f87662b9-6ac6-44f3-8a16-ff858c2baa91" volumeName="kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-env-overrides" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606437 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42b4d53c-af72-44c8-9605-271445f95f87" volumeName="kubernetes.io/projected/42b4d53c-af72-44c8-9605-271445f95f87-kube-api-access-kjcjm" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606473 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ed5e77b-948b-4d94-ac9f-440ee3c07e18" volumeName="kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606507 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ed5e77b-948b-4d94-ac9f-440ee3c07e18" volumeName="kubernetes.io/secret/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606518 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa4b44d-f202-4670-afab-44b38960026f" volumeName="kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-cni-binary-copy" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606718 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8d40b37-0f3d-4531-9fa8-eda965d2337d" volumeName="kubernetes.io/empty-dir/b8d40b37-0f3d-4531-9fa8-eda965d2337d-operand-assets" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606816 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42b4d53c-af72-44c8-9605-271445f95f87" volumeName="kubernetes.io/configmap/42b4d53c-af72-44c8-9605-271445f95f87-trusted-ca" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606858 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5843b0d4-a538-4261-b425-598e318c9d07" volumeName="kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-binary-copy" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606879 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-config" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606888 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b2e803-302b-4650-b18f-d3d2dd703bd5" volumeName="kubernetes.io/projected/37b2e803-302b-4650-b18f-d3d2dd703bd5-kube-api-access-hp2qn" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606909 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5843b0d4-a538-4261-b425-598e318c9d07" volumeName="kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606919 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667717b-fb74-456b-8615-16475cb69e98" volumeName="kubernetes.io/configmap/7667717b-fb74-456b-8615-16475cb69e98-trusted-ca" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606936 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86ae8cb8-72b3-4be6-9feb-ee0c0da42dba" volumeName="kubernetes.io/secret/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.606996 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b12e76f4-b960-4534-90e6-a2cdbecd1728" volumeName="kubernetes.io/configmap/b12e76f4-b960-4534-90e6-a2cdbecd1728-iptables-alerter-script" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607014 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8d40b37-0f3d-4531-9fa8-eda965d2337d" volumeName="kubernetes.io/projected/b8d40b37-0f3d-4531-9fa8-eda965d2337d-kube-api-access-l5rht" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607038 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-service-ca-bundle" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607051 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" volumeName="kubernetes.io/projected/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-kube-api-access-lpdlr" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607060 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" volumeName="kubernetes.io/secret/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607076 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/projected/574bf255-14b3-40af-b240-2d3abd5b86b8-kube-api-access-grplv" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607097 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667717b-fb74-456b-8615-16475cb69e98" volumeName="kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-bound-sa-token" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607112 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cf9326b-bc23-45c2-82c4-9c08c739ac5a" volumeName="kubernetes.io/configmap/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-trusted-ca" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607132 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/projected/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-kube-api-access-vxvqn" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607146 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/secret/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607197 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c12a5d5-711f-4663-974c-c4b06e15fc39" volumeName="kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-env-overrides" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607233 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ed5e77b-948b-4d94-ac9f-440ee3c07e18" volumeName="kubernetes.io/projected/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-kube-api-access-22bwx" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607246 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed47c57-533f-43e4-88eb-07da29b4878f" volumeName="kubernetes.io/projected/6ed47c57-533f-43e4-88eb-07da29b4878f-kube-api-access-rjk5l" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607264 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79bb87a4-8834-4c73-834e-356ccc1f7f9b" volumeName="kubernetes.io/projected/79bb87a4-8834-4c73-834e-356ccc1f7f9b-kube-api-access-56qz6" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607278 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cf9326b-bc23-45c2-82c4-9c08c739ac5a" volumeName="kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-kube-api-access-m5vcv" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607296 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9db15a-8854-485b-9863-9cbe5dddd977" volumeName="kubernetes.io/secret/8f9db15a-8854-485b-9863-9cbe5dddd977-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607327 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a998af-4fc0-4078-a6a0-93dde6c00508" volumeName="kubernetes.io/projected/a1a998af-4fc0-4078-a6a0-93dde6c00508-kube-api-access-p29zg" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607337 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b2e803-302b-4650-b18f-d3d2dd703bd5" volumeName="kubernetes.io/secret/37b2e803-302b-4650-b18f-d3d2dd703bd5-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607367 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-config" seLinuxMountContext=""
Mar 13 10:35:45.607374 master-0 kubenswrapper[7271]: I0313 10:35:45.607376 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed47c57-533f-43e4-88eb-07da29b4878f" volumeName="kubernetes.io/secret/6ed47c57-533f-43e4-88eb-07da29b4878f-serving-cert" seLinuxMountContext=""
Mar 13 10:35:45.608954 master-0 kubenswrapper[7271]: I0313 10:35:45.607482 7271 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9db15a-8854-485b-9863-9cbe5dddd977" volumeName="kubernetes.io/projected/8f9db15a-8854-485b-9863-9cbe5dddd977-kube-api-access" seLinuxMountContext=""
Mar 13 10:35:45.608954 master-0 kubenswrapper[7271]: I0313 10:35:45.607504 7271 reconstruct.go:97] "Volume reconstruction finished"
Mar 13 10:35:45.608954 master-0 kubenswrapper[7271]: I0313 10:35:45.607511 7271 reconciler.go:26] "Reconciler: start to sync state"
Mar 13 10:35:45.609543 master-0 kubenswrapper[7271]: I0313 10:35:45.609523 7271 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 13 10:35:45.643146 master-0 kubenswrapper[7271]: I0313 10:35:45.642952 7271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 13 10:35:45.644245 master-0 kubenswrapper[7271]: I0313 10:35:45.644209 7271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 13 10:35:45.644297 master-0 kubenswrapper[7271]: I0313 10:35:45.644279 7271 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 13 10:35:45.644359 master-0 kubenswrapper[7271]: I0313 10:35:45.644313 7271 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 13 10:35:45.644430 master-0 kubenswrapper[7271]: E0313 10:35:45.644363 7271 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 10:35:45.654276 master-0 kubenswrapper[7271]: I0313 10:35:45.652341 7271 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 13 10:35:45.663245 master-0 kubenswrapper[7271]: I0313 10:35:45.663127 7271 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="80284c850caf3e93eb3675f42a22bd510ddbac6e27d80f3eae83dafefe028254" exitCode=1
Mar 13 10:35:45.679946 master-0 kubenswrapper[7271]: I0313 10:35:45.679887 7271 generic.go:334] "Generic (PLEG): container finished"
podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="b9afa0d6c9ded08257918288601275e200a1f5d816485290920a81d0a9149405" exitCode=0 Mar 13 10:35:45.679946 master-0 kubenswrapper[7271]: I0313 10:35:45.679939 7271 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="a71a5f7050d9b50b349f60da266053c0daef17268d0a768624b3f4f70f7f01a0" exitCode=0 Mar 13 10:35:45.679946 master-0 kubenswrapper[7271]: I0313 10:35:45.679950 7271 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="5aea8eda95c6cad12da786a1a1cc2a69af0868d380d904ea93a9398f7754ee5b" exitCode=0 Mar 13 10:35:45.680273 master-0 kubenswrapper[7271]: I0313 10:35:45.679971 7271 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="c52caffe2e52c9e9297b6c1f2ec3f7f6e6e6506eb77ca1a1569946e8d355217d" exitCode=0 Mar 13 10:35:45.680273 master-0 kubenswrapper[7271]: I0313 10:35:45.679981 7271 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="f1eb6056de76c4d6a8863b61770ab5ed8e00f850c41514ac1273f8663adc746a" exitCode=0 Mar 13 10:35:45.680273 master-0 kubenswrapper[7271]: I0313 10:35:45.679989 7271 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="1a1885581af587b9ba505d0bc5381467495165cc081fe48fe67060864afa4c7a" exitCode=0 Mar 13 10:35:45.683195 master-0 kubenswrapper[7271]: I0313 10:35:45.683111 7271 generic.go:334] "Generic (PLEG): container finished" podID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerID="1bea0672139d7f4dff089e018c1c16d0afb0f3f466924f1394e930cdfd82c0f0" exitCode=0 Mar 13 10:35:45.694533 master-0 kubenswrapper[7271]: I0313 10:35:45.694484 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 10:35:45.695464 master-0 kubenswrapper[7271]: I0313 10:35:45.695417 7271 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="609f0e5551a709b73298eb7117d146c048b1a886bac85012fa0f0c1a2a1cd687" exitCode=1 Mar 13 10:35:45.695464 master-0 kubenswrapper[7271]: I0313 10:35:45.695459 7271 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="a6a13e582092662aa7c7eefb83f8515ba545374741aab1781847bd04e676290b" exitCode=0 Mar 13 10:35:45.698467 master-0 kubenswrapper[7271]: I0313 10:35:45.698431 7271 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04" exitCode=0 Mar 13 10:35:45.699791 master-0 kubenswrapper[7271]: I0313 10:35:45.699761 7271 generic.go:334] "Generic (PLEG): container finished" podID="d917075d-bc69-49b3-acab-c4d496dd04fc" containerID="4f342d2d66294bd06ac08cc498f323a859474645f1865395b674bff6a68af1e6" exitCode=0 Mar 13 10:35:45.722438 master-0 kubenswrapper[7271]: I0313 10:35:45.722036 7271 generic.go:334] "Generic (PLEG): container finished" podID="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" containerID="5a756cbc772c72bcdf3f7b55e67e0c66e077c8bc9496058fd8ad31da12ffe6d7" exitCode=0 Mar 13 10:35:45.740391 master-0 kubenswrapper[7271]: I0313 10:35:45.740337 7271 manager.go:324] Recovery completed Mar 13 10:35:45.744677 master-0 kubenswrapper[7271]: E0313 10:35:45.744616 7271 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 10:35:45.787169 master-0 kubenswrapper[7271]: I0313 10:35:45.787097 7271 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 10:35:45.787169 master-0 kubenswrapper[7271]: I0313 10:35:45.787157 7271 
cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 10:35:45.787169 master-0 kubenswrapper[7271]: I0313 10:35:45.787186 7271 state_mem.go:36] "Initialized new in-memory state store" Mar 13 10:35:45.787564 master-0 kubenswrapper[7271]: I0313 10:35:45.787530 7271 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 10:35:45.787642 master-0 kubenswrapper[7271]: I0313 10:35:45.787570 7271 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 10:35:45.787642 master-0 kubenswrapper[7271]: I0313 10:35:45.787641 7271 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 13 10:35:45.787733 master-0 kubenswrapper[7271]: I0313 10:35:45.787651 7271 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 13 10:35:45.787733 master-0 kubenswrapper[7271]: I0313 10:35:45.787661 7271 policy_none.go:49] "None policy: Start" Mar 13 10:35:45.794032 master-0 kubenswrapper[7271]: I0313 10:35:45.793989 7271 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 10:35:45.794032 master-0 kubenswrapper[7271]: I0313 10:35:45.794033 7271 state_mem.go:35] "Initializing new in-memory state store" Mar 13 10:35:45.794359 master-0 kubenswrapper[7271]: I0313 10:35:45.794328 7271 state_mem.go:75] "Updated machine memory state" Mar 13 10:35:45.794359 master-0 kubenswrapper[7271]: I0313 10:35:45.794347 7271 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 13 10:35:45.806015 master-0 kubenswrapper[7271]: I0313 10:35:45.805955 7271 manager.go:334] "Starting Device Plugin manager" Mar 13 10:35:45.806291 master-0 kubenswrapper[7271]: I0313 10:35:45.806029 7271 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 10:35:45.806291 master-0 kubenswrapper[7271]: I0313 10:35:45.806065 7271 server.go:79] "Starting device plugin registration server" Mar 13 10:35:45.806724 master-0 kubenswrapper[7271]: I0313 
10:35:45.806705 7271 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 10:35:45.806782 master-0 kubenswrapper[7271]: I0313 10:35:45.806728 7271 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 10:35:45.807371 master-0 kubenswrapper[7271]: I0313 10:35:45.806936 7271 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 10:35:45.807371 master-0 kubenswrapper[7271]: I0313 10:35:45.807035 7271 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 10:35:45.807371 master-0 kubenswrapper[7271]: I0313 10:35:45.807045 7271 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 10:35:45.907789 master-0 kubenswrapper[7271]: I0313 10:35:45.907636 7271 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:35:45.910244 master-0 kubenswrapper[7271]: I0313 10:35:45.910178 7271 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:35:45.910244 master-0 kubenswrapper[7271]: I0313 10:35:45.910239 7271 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:35:45.910244 master-0 kubenswrapper[7271]: I0313 10:35:45.910251 7271 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:35:45.910412 master-0 kubenswrapper[7271]: I0313 10:35:45.910327 7271 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 10:35:45.921411 master-0 kubenswrapper[7271]: I0313 10:35:45.921357 7271 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 13 10:35:45.921771 master-0 kubenswrapper[7271]: I0313 10:35:45.921525 7271 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 13 10:35:45.945941 master-0 
kubenswrapper[7271]: I0313 10:35:45.945805 7271 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 10:35:45.947089 master-0 kubenswrapper[7271]: I0313 10:35:45.946987 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"b25246b87fe6711f1f7c66db1d40e94041f17222319c643c72a0f13f39f94ce3"} Mar 13 10:35:45.947089 master-0 kubenswrapper[7271]: I0313 10:35:45.947075 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"e9000808717ea9c0e3216e703e0ba1564b42f55e959843c60a49ae0e4eb9a8e7"} Mar 13 10:35:45.947089 master-0 kubenswrapper[7271]: I0313 10:35:45.947088 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"80284c850caf3e93eb3675f42a22bd510ddbac6e27d80f3eae83dafefe028254"} Mar 13 10:35:45.947249 master-0 kubenswrapper[7271]: I0313 10:35:45.947104 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"ecef9696f6ed61b901e54b92a5f3382e4d7c9cf19d60275d449ceb9924469019"} Mar 13 10:35:45.947298 master-0 kubenswrapper[7271]: I0313 10:35:45.947250 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3684ce24f4407551543f74ac9f1a5ab3d105e55ba443e4519febf4f030d8826c" Mar 13 10:35:45.947345 master-0 
kubenswrapper[7271]: I0313 10:35:45.947304 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe"} Mar 13 10:35:45.947389 master-0 kubenswrapper[7271]: I0313 10:35:45.947350 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b"} Mar 13 10:35:45.947389 master-0 kubenswrapper[7271]: I0313 10:35:45.947366 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"bdc4eedb705036ff81733c276b076c49a4edd20b45c63ea797578c8d980a671b"} Mar 13 10:35:45.947389 master-0 kubenswrapper[7271]: I0313 10:35:45.947387 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"9c670eb6abb5de03cd978fcc4efcfd81c65dafc0d610959d205735ca6df3ab91"} Mar 13 10:35:45.947498 master-0 kubenswrapper[7271]: I0313 10:35:45.947402 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"609f0e5551a709b73298eb7117d146c048b1a886bac85012fa0f0c1a2a1cd687"} Mar 13 10:35:45.947498 master-0 kubenswrapper[7271]: I0313 10:35:45.947418 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"a6a13e582092662aa7c7eefb83f8515ba545374741aab1781847bd04e676290b"} Mar 13 10:35:45.947498 
master-0 kubenswrapper[7271]: I0313 10:35:45.947428 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1"} Mar 13 10:35:45.947498 master-0 kubenswrapper[7271]: I0313 10:35:45.947440 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f"} Mar 13 10:35:45.947498 master-0 kubenswrapper[7271]: I0313 10:35:45.947450 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239"} Mar 13 10:35:45.947498 master-0 kubenswrapper[7271]: I0313 10:35:45.947460 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04"} Mar 13 10:35:45.947498 master-0 kubenswrapper[7271]: I0313 10:35:45.947472 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"9681f2e75cd38c3ac67ed3e69a8ec48ca8451d34a1c4febdd60d09ed10b5be76"} Mar 13 10:35:45.947498 master-0 kubenswrapper[7271]: I0313 10:35:45.947486 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dbe88fb4909398ce9a6240667ba14343e79180353202a50737fcc30200eae3a" Mar 13 10:35:45.947498 master-0 kubenswrapper[7271]: I0313 10:35:45.947494 
7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"3bbb19054cdef32aad8515587717178e7bce7c315eb6bc762119d4e27dd7a9b0"} Mar 13 10:35:45.947498 master-0 kubenswrapper[7271]: I0313 10:35:45.947505 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"3a800ac5d1779f65790cfc04fd054cd45e77032d228f479c2dc831649fa5ed50"} Mar 13 10:35:45.965449 master-0 kubenswrapper[7271]: W0313 10:35:45.965350 7271 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 13 10:35:45.965449 master-0 kubenswrapper[7271]: E0313 10:35:45.965424 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:45.965828 master-0 kubenswrapper[7271]: E0313 10:35:45.965378 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 
13 10:35:45.965828 master-0 kubenswrapper[7271]: E0313 10:35:45.965537 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:35:45.965828 master-0 kubenswrapper[7271]: E0313 10:35:45.965385 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:35:45.965828 master-0 kubenswrapper[7271]: E0313 10:35:45.965795 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:35:46.010555 master-0 kubenswrapper[7271]: I0313 10:35:46.010485 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.010555 master-0 kubenswrapper[7271]: I0313 10:35:46.010544 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.010842 master-0 kubenswrapper[7271]: I0313 10:35:46.010606 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:35:46.010842 master-0 kubenswrapper[7271]: I0313 10:35:46.010635 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.010842 master-0 kubenswrapper[7271]: I0313 10:35:46.010655 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.010842 master-0 kubenswrapper[7271]: I0313 10:35:46.010684 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.010842 master-0 kubenswrapper[7271]: I0313 10:35:46.010724 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.010842 master-0 kubenswrapper[7271]: I0313 10:35:46.010787 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.011070 master-0 kubenswrapper[7271]: I0313 10:35:46.010914 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.011070 master-0 kubenswrapper[7271]: I0313 10:35:46.010991 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:35:46.011070 master-0 kubenswrapper[7271]: I0313 10:35:46.011024 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:35:46.011070 master-0 kubenswrapper[7271]: I0313 10:35:46.011041 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.011223 master-0 kubenswrapper[7271]: I0313 10:35:46.011100 7271 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:35:46.011223 master-0 kubenswrapper[7271]: I0313 10:35:46.011120 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:35:46.011223 master-0 kubenswrapper[7271]: I0313 10:35:46.011156 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.011223 master-0 kubenswrapper[7271]: I0313 10:35:46.011174 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.011223 master-0 kubenswrapper[7271]: I0313 10:35:46.011205 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:35:46.112527 master-0 kubenswrapper[7271]: I0313 10:35:46.112450 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.112527 master-0 kubenswrapper[7271]: I0313 10:35:46.112509 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112535 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112628 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112664 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 
10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112704 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112684 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112715 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112735 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112790 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112814 7271 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112837 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112845 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112869 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:35:46.112880 master-0 kubenswrapper[7271]: I0313 10:35:46.112883 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.112891 7271 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.112897 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.112919 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.112938 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.112958 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.113698 master-0 
kubenswrapper[7271]: I0313 10:35:46.112974 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113012 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113011 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113045 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113070 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113070 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113090 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113101 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113118 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113120 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod 
\"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113141 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113168 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113205 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.113698 master-0 kubenswrapper[7271]: I0313 10:35:46.113206 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.278216 master-0 kubenswrapper[7271]: I0313 10:35:46.278037 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
Mar 13 10:35:46.282506 master-0 kubenswrapper[7271]: I0313 10:35:46.282454 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:35:46.327147 master-0 kubenswrapper[7271]: I0313 10:35:46.327054 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.331353 master-0 kubenswrapper[7271]: I0313 10:35:46.331314 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:35:46.583145 master-0 kubenswrapper[7271]: I0313 10:35:46.583072 7271 apiserver.go:52] "Watching apiserver" Mar 13 10:35:46.591418 master-0 kubenswrapper[7271]: I0313 10:35:46.591366 7271 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 10:35:46.592681 master-0 kubenswrapper[7271]: I0313 10:35:46.592632 7271 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz","openshift-multus/multus-admission-controller-8d675b596-d787l","openshift-network-operator/network-operator-7c649bf6d4-6vpl4","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh","kube-system/bootstrap-kube-controller-manager-master-0","openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh","openshift-ingress-operator/ingress-operator-677db989d6-tzd9b","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74","openshift-multus/multus-additional-cni-plugins-mc5nc","openshift-multus/multus-qng6t","openshift-network-diagnostics/network-check-target-96vwf","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn","kube-system/bootstrap-kube-scheduler-master-0","openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z","openshift-network-operator/iptables-alerter-gdjjd","openshift-dns-operator/dns-operator-589895fbb7-wjrpm","openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr","openshift-etcd/etcd-master-0-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl","openshift-marke
tplace/marketplace-operator-64bf9778cb-85x6d","openshift-network-node-identity/network-node-identity-9z8mk","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl","openshift-ovn-kubernetes/ovnkube-node-hztqp","assisted-installer/assisted-installer-controller-s68gq","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/network-metrics-daemon-jz2lp"] Mar 13 10:35:46.593002 master-0 kubenswrapper[7271]: I0313 10:35:46.592962 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s68gq" Mar 13 10:35:46.593097 master-0 kubenswrapper[7271]: I0313 10:35:46.593063 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:35:46.594401 master-0 kubenswrapper[7271]: I0313 10:35:46.594368 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:35:46.594509 master-0 kubenswrapper[7271]: I0313 10:35:46.594486 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:46.595216 master-0 kubenswrapper[7271]: I0313 10:35:46.595191 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:46.596376 master-0 kubenswrapper[7271]: I0313 10:35:46.596338 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.599744 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.600653 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.600312 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.601464 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.603537 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.603807 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.603825 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.603997 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.603999 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604274 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604307 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604406 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604439 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604447 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604281 7271 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604525 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604551 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604441 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604663 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604723 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.604922 master-0 kubenswrapper[7271]: I0313 10:35:46.604818 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 10:35:46.606443 master-0 kubenswrapper[7271]: I0313 10:35:46.606407 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 10:35:46.606509 master-0 kubenswrapper[7271]: I0313 10:35:46.604395 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 10:35:46.606678 master-0 kubenswrapper[7271]: I0313 10:35:46.606653 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 13 10:35:46.606798 master-0 kubenswrapper[7271]: I0313 10:35:46.606780 7271 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 10:35:46.607141 master-0 kubenswrapper[7271]: I0313 10:35:46.607102 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:46.607637 master-0 kubenswrapper[7271]: I0313 10:35:46.607579 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:46.607909 master-0 kubenswrapper[7271]: I0313 10:35:46.607894 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:46.608007 master-0 kubenswrapper[7271]: I0313 10:35:46.607990 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:46.608317 master-0 kubenswrapper[7271]: I0313 10:35:46.608263 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:35:46.608554 master-0 kubenswrapper[7271]: I0313 10:35:46.608531 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.610143 master-0 kubenswrapper[7271]: I0313 10:35:46.609412 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:46.610781 master-0 kubenswrapper[7271]: I0313 10:35:46.610752 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 13 10:35:46.610988 master-0 kubenswrapper[7271]: I0313 10:35:46.610951 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 10:35:46.611052 master-0 kubenswrapper[7271]: I0313 10:35:46.610999 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 10:35:46.611092 master-0 kubenswrapper[7271]: I0313 10:35:46.611076 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 13 10:35:46.611301 master-0 kubenswrapper[7271]: I0313 10:35:46.611276 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 10:35:46.611412 master-0 kubenswrapper[7271]: I0313 10:35:46.611397 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 10:35:46.611483 master-0 kubenswrapper[7271]: I0313 10:35:46.611454 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 13 10:35:46.611523 master-0 kubenswrapper[7271]: I0313 10:35:46.611477 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 13 10:35:46.611555 master-0 kubenswrapper[7271]: I0313 10:35:46.611403 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 10:35:46.611601 master-0 kubenswrapper[7271]: I0313 
10:35:46.611558 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 10:35:46.611631 master-0 kubenswrapper[7271]: I0313 10:35:46.611276 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 13 10:35:46.612089 master-0 kubenswrapper[7271]: I0313 10:35:46.612033 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 13 10:35:46.613470 master-0 kubenswrapper[7271]: I0313 10:35:46.613435 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 10:35:46.613530 master-0 kubenswrapper[7271]: I0313 10:35:46.613517 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 10:35:46.613660 master-0 kubenswrapper[7271]: I0313 10:35:46.613470 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 10:35:46.613711 master-0 kubenswrapper[7271]: I0313 10:35:46.613685 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 10:35:46.613821 master-0 kubenswrapper[7271]: I0313 10:35:46.613798 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 10:35:46.614522 master-0 kubenswrapper[7271]: I0313 10:35:46.613633 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 10:35:46.614522 master-0 kubenswrapper[7271]: I0313 10:35:46.614117 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 13 
10:35:46.614522 master-0 kubenswrapper[7271]: I0313 10:35:46.614234 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 10:35:46.616685 master-0 kubenswrapper[7271]: I0313 10:35:46.616639 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 13 10:35:46.616931 master-0 kubenswrapper[7271]: I0313 10:35:46.616914 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 10:35:46.617070 master-0 kubenswrapper[7271]: I0313 10:35:46.617003 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:46.617070 master-0 kubenswrapper[7271]: I0313 10:35:46.617030 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.617205 master-0 kubenswrapper[7271]: I0313 10:35:46.617129 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 10:35:46.617205 master-0 kubenswrapper[7271]: I0313 10:35:46.617032 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec3168fc-6c8f-4603-94e0-17b1ae22a802-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" Mar 13 10:35:46.617372 master-0 kubenswrapper[7271]: I0313 
10:35:46.617267 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 10:35:46.617372 master-0 kubenswrapper[7271]: I0313 10:35:46.617273 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vcv\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-kube-api-access-m5vcv\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:46.617372 master-0 kubenswrapper[7271]: I0313 10:35:46.617302 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-serving-cert\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:35:46.617372 master-0 kubenswrapper[7271]: I0313 10:35:46.617327 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-metrics-tls\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" Mar 13 10:35:46.617372 master-0 kubenswrapper[7271]: I0313 10:35:46.617349 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:46.617372 master-0 kubenswrapper[7271]: I0313 10:35:46.617374 7271 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6ed47c57-533f-43e4-88eb-07da29b4878f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617397 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b2e803-302b-4650-b18f-d3d2dd703bd5-config\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617419 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617755 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617809 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617874 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617874 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " 
pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617905 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b2e803-302b-4650-b18f-d3d2dd703bd5-config\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617418 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grplv\" (UniqueName: \"kubernetes.io/projected/574bf255-14b3-40af-b240-2d3abd5b86b8-kube-api-access-grplv\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617953 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4d5479f3-51ec-4b93-8188-21cdda44828d-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617962 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617969 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.617975 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a998af-4fc0-4078-a6a0-93dde6c00508-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:46.618077 master-0 kubenswrapper[7271]: I0313 10:35:46.618009 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3168fc-6c8f-4603-94e0-17b1ae22a802-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618101 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618182 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618210 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618222 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6ed47c57-533f-43e4-88eb-07da29b4878f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618232 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-serving-cert\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618285 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-serving-cert\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618324 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618314 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4d5479f3-51ec-4b93-8188-21cdda44828d-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618400 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-config\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618487 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618493 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-serving-cert\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618616 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618641 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-serving-cert\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618659 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-config\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618711 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-config\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618815 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-serving-cert\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618814 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p29zg\" (UniqueName: \"kubernetes.io/projected/a1a998af-4fc0-4078-a6a0-93dde6c00508-kube-api-access-p29zg\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618880 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a998af-4fc0-4078-a6a0-93dde6c00508-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618936 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp2qn\" (UniqueName: \"kubernetes.io/projected/37b2e803-302b-4650-b18f-d3d2dd703bd5-kube-api-access-hp2qn\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.618960 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-config\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.619031 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7667717b-fb74-456b-8615-16475cb69e98-trusted-ca\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.619063 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfpp\" (UniqueName: \"kubernetes.io/projected/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-kube-api-access-8rfpp\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.619086 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ed47c57-533f-43e4-88eb-07da29b4878f-serving-cert\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.619111 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b2e803-302b-4650-b18f-d3d2dd703bd5-serving-cert\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.619116 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a998af-4fc0-4078-a6a0-93dde6c00508-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.619136 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-bound-sa-token\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.619166 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.619305 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 13 10:35:46.619790 master-0 kubenswrapper[7271]: I0313 10:35:46.619762 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b2e803-302b-4650-b18f-d3d2dd703bd5-serving-cert\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:35:46.620539 master-0 kubenswrapper[7271]: I0313 10:35:46.620152 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ed47c57-533f-43e4-88eb-07da29b4878f-serving-cert\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:35:46.620539 master-0 kubenswrapper[7271]: I0313 10:35:46.620394 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrlx\" (UniqueName: \"kubernetes.io/projected/866cf034-8fd8-4f16-8e9b-68627228aa8d-kube-api-access-mnrlx\") pod \"csi-snapshot-controller-operator-5685fbc7d-mfvmx\" (UID: \"866cf034-8fd8-4f16-8e9b-68627228aa8d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx"
Mar 13 10:35:46.620539 master-0 kubenswrapper[7271]: I0313 10:35:46.620453 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9db15a-8854-485b-9863-9cbe5dddd977-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:35:46.620539 master-0 kubenswrapper[7271]: I0313 10:35:46.620492 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-config\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:35:46.620693 master-0 kubenswrapper[7271]: I0313 10:35:46.620555 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqjkf\" (UniqueName: \"kubernetes.io/projected/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-kube-api-access-qqjkf\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:46.620729 master-0 kubenswrapper[7271]: I0313 10:35:46.620672 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:46.621439 master-0 kubenswrapper[7271]: I0313 10:35:46.620739 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.621439 master-0 kubenswrapper[7271]: I0313 10:35:46.620835 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 13 10:35:46.621439 master-0 kubenswrapper[7271]: I0313 10:35:46.620871 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3168fc-6c8f-4603-94e0-17b1ae22a802-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:46.621439 master-0 kubenswrapper[7271]: I0313 10:35:46.620956 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.621439 master-0 kubenswrapper[7271]: I0313 10:35:46.621395 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3168fc-6c8f-4603-94e0-17b1ae22a802-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:46.621857 master-0 kubenswrapper[7271]: I0313 10:35:46.621827 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:46.622018 master-0 kubenswrapper[7271]: I0313 10:35:46.621976 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84xk\" (UniqueName: \"kubernetes.io/projected/2afe3890-e844-4dd3-ba49-3ac9178549bf-kube-api-access-d84xk\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:46.622148 master-0 kubenswrapper[7271]: I0313 10:35:46.622083 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cpdn\" (UniqueName: \"kubernetes.io/projected/c455a959-d764-4b4f-a1e0-95c27495dd9d-kube-api-access-2cpdn\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:46.622148 master-0 kubenswrapper[7271]: I0313 10:35:46.622124 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rht\" (UniqueName: \"kubernetes.io/projected/b8d40b37-0f3d-4531-9fa8-eda965d2337d-kube-api-access-l5rht\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:35:46.622148 master-0 kubenswrapper[7271]: I0313 10:35:46.622146 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9db15a-8854-485b-9863-9cbe5dddd977-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:35:46.622301 master-0 kubenswrapper[7271]: I0313 10:35:46.622166 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd2mn\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-kube-api-access-qd2mn\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:46.622301 master-0 kubenswrapper[7271]: I0313 10:35:46.622236 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d40b37-0f3d-4531-9fa8-eda965d2337d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:35:46.622301 master-0 kubenswrapper[7271]: I0313 10:35:46.622260 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:46.622613 master-0 kubenswrapper[7271]: I0313 10:35:46.622554 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:46.622681 master-0 kubenswrapper[7271]: I0313 10:35:46.622600 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:46.622681 master-0 kubenswrapper[7271]: I0313 10:35:46.622650 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-client\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.622769 master-0 kubenswrapper[7271]: I0313 10:35:46.622676 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9db15a-8854-485b-9863-9cbe5dddd977-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:35:46.622769 master-0 kubenswrapper[7271]: I0313 10:35:46.622739 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-config\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.622843 master-0 kubenswrapper[7271]: I0313 10:35:46.622807 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpdlr\" (UniqueName: \"kubernetes.io/projected/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-kube-api-access-lpdlr\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:35:46.622843 master-0 kubenswrapper[7271]: I0313 10:35:46.622825 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6xlb\" (UniqueName: \"kubernetes.io/projected/4d5479f3-51ec-4b93-8188-21cdda44828d-kube-api-access-j6xlb\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:46.622843 master-0 kubenswrapper[7271]: I0313 10:35:46.622843 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/3ff2ab1c-7057-4e18-8e32-68807f86532a-kube-api-access-8c4rc\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:46.622943 master-0 kubenswrapper[7271]: I0313 10:35:46.622866 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/b8d40b37-0f3d-4531-9fa8-eda965d2337d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:35:46.622943 master-0 kubenswrapper[7271]: I0313 10:35:46.622884 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.622943 master-0 kubenswrapper[7271]: I0313 10:35:46.622901 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcjm\" (UniqueName: \"kubernetes.io/projected/42b4d53c-af72-44c8-9605-271445f95f87-kube-api-access-kjcjm\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:46.622943 master-0 kubenswrapper[7271]: I0313 10:35:46.622922 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:35:46.623073 master-0 kubenswrapper[7271]: I0313 10:35:46.622987 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjk5l\" (UniqueName: \"kubernetes.io/projected/6ed47c57-533f-43e4-88eb-07da29b4878f-kube-api-access-rjk5l\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:35:46.623073 master-0 kubenswrapper[7271]: I0313 10:35:46.623026 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:46.623073 master-0 kubenswrapper[7271]: I0313 10:35:46.623040 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-config\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.623177 master-0 kubenswrapper[7271]: I0313 10:35:46.623121 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/b8d40b37-0f3d-4531-9fa8-eda965d2337d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:35:46.623477 master-0 kubenswrapper[7271]: I0313 10:35:46.623276 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:46.623477 master-0 kubenswrapper[7271]: I0313 10:35:46.623294 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.623477 master-0 kubenswrapper[7271]: I0313 10:35:46.623321 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22bwx\" (UniqueName: \"kubernetes.io/projected/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-kube-api-access-22bwx\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:46.623477 master-0 kubenswrapper[7271]: I0313 10:35:46.623364 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b4d53c-af72-44c8-9605-271445f95f87-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:46.623477 master-0 kubenswrapper[7271]: I0313 10:35:46.623405 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:46.623720 master-0 kubenswrapper[7271]: I0313 10:35:46.623549 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-host-etc-kube\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:35:46.623962 master-0 kubenswrapper[7271]: I0313 10:35:46.623749 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:35:46.624402 master-0 kubenswrapper[7271]: I0313 10:35:46.624360 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-client\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.624669 master-0 kubenswrapper[7271]: I0313 10:35:46.624631 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 10:35:46.626262 master-0 kubenswrapper[7271]: I0313 10:35:46.625140 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d40b37-0f3d-4531-9fa8-eda965d2337d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:35:46.626262 master-0 kubenswrapper[7271]: I0313 10:35:46.627789 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 10:35:46.626262 master-0 kubenswrapper[7271]: I0313 10:35:46.627834 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:35:46.626262 master-0 kubenswrapper[7271]: I0313 10:35:46.628394 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 13 10:35:46.630311 master-0 kubenswrapper[7271]: I0313 10:35:46.630228 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 10:35:46.630471 master-0 kubenswrapper[7271]: I0313 10:35:46.630447 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 10:35:46.630674 master-0 kubenswrapper[7271]: I0313 10:35:46.630252 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 10:35:46.630674 master-0 kubenswrapper[7271]: I0313 10:35:46.630447 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 10:35:46.630674 master-0 kubenswrapper[7271]: I0313 10:35:46.630658 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 10:35:46.630775 master-0 kubenswrapper[7271]: I0313 10:35:46.630258 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 10:35:46.630965 master-0 kubenswrapper[7271]: I0313 10:35:46.630917 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 10:35:46.631245 master-0 kubenswrapper[7271]: I0313 10:35:46.631216 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 10:35:46.632340 master-0 kubenswrapper[7271]: I0313 10:35:46.631338 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 13 10:35:46.632340 master-0 kubenswrapper[7271]: I0313 10:35:46.631453 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9db15a-8854-485b-9863-9cbe5dddd977-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:35:46.632340 master-0 kubenswrapper[7271]: I0313 10:35:46.631496 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 13 10:35:46.632340 master-0 kubenswrapper[7271]: I0313 10:35:46.631712 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 10:35:46.632340 master-0 kubenswrapper[7271]: I0313 10:35:46.632166 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 10:35:46.632340 master-0 kubenswrapper[7271]: I0313 10:35:46.632298 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 13 10:35:46.632555 master-0 kubenswrapper[7271]: I0313 10:35:46.632424 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 10:35:46.633000 master-0 kubenswrapper[7271]: I0313 10:35:46.632933 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 13 10:35:46.633000 master-0 kubenswrapper[7271]: I0313 10:35:46.632996 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 10:35:46.633260 master-0 kubenswrapper[7271]: I0313 10:35:46.633173 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 13 10:35:46.633362 master-0 kubenswrapper[7271]: I0313 10:35:46.633338 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 13 10:35:46.633827 master-0 kubenswrapper[7271]: I0313 10:35:46.633807 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-config\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:35:46.633928 master-0 kubenswrapper[7271]: I0313 10:35:46.633899 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 13 10:35:46.634999 master-0 kubenswrapper[7271]: I0313 10:35:46.634982 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 13 10:35:46.635251 master-0 kubenswrapper[7271]: I0313 10:35:46.635082 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 10:35:46.635358 master-0 kubenswrapper[7271]: I0313 10:35:46.635330 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 13 10:35:46.636317 master-0 kubenswrapper[7271]: I0313 10:35:46.636296 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 13 10:35:46.636372 master-0 kubenswrapper[7271]: I0313 10:35:46.636337 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 10:35:46.636665 master-0 kubenswrapper[7271]: I0313 10:35:46.636628 7271
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 10:35:46.636708 master-0 kubenswrapper[7271]: I0313 10:35:46.636688 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 10:35:46.637097 master-0 kubenswrapper[7271]: I0313 10:35:46.637036 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 10:35:46.637157 master-0 kubenswrapper[7271]: I0313 10:35:46.637088 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 10:35:46.637188 master-0 kubenswrapper[7271]: I0313 10:35:46.637111 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 10:35:46.637545 master-0 kubenswrapper[7271]: I0313 10:35:46.637520 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 10:35:46.637641 master-0 kubenswrapper[7271]: I0313 10:35:46.637622 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 10:35:46.637758 master-0 kubenswrapper[7271]: I0313 10:35:46.637743 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9db15a-8854-485b-9863-9cbe5dddd977-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:35:46.638191 master-0 kubenswrapper[7271]: I0313 10:35:46.638147 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-metrics-tls\") pod 
\"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" Mar 13 10:35:46.638235 master-0 kubenswrapper[7271]: I0313 10:35:46.638174 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a998af-4fc0-4078-a6a0-93dde6c00508-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" Mar 13 10:35:46.638277 master-0 kubenswrapper[7271]: I0313 10:35:46.638245 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 10:35:46.638374 master-0 kubenswrapper[7271]: I0313 10:35:46.638317 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 10:35:46.638567 master-0 kubenswrapper[7271]: I0313 10:35:46.638539 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3168fc-6c8f-4603-94e0-17b1ae22a802-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" Mar 13 10:35:46.642251 master-0 kubenswrapper[7271]: I0313 10:35:46.642218 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 10:35:46.643443 master-0 kubenswrapper[7271]: I0313 10:35:46.643393 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 13 10:35:46.643823 master-0 kubenswrapper[7271]: I0313 10:35:46.643791 7271 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:46.644901 master-0 kubenswrapper[7271]: I0313 10:35:46.644624 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b4d53c-af72-44c8-9605-271445f95f87-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:46.647661 master-0 kubenswrapper[7271]: I0313 10:35:46.645663 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 10:35:46.647661 master-0 kubenswrapper[7271]: I0313 10:35:46.647000 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 10:35:46.647661 master-0 kubenswrapper[7271]: I0313 10:35:46.649692 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7667717b-fb74-456b-8615-16475cb69e98-trusted-ca\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:46.651789 master-0 kubenswrapper[7271]: I0313 10:35:46.650670 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 10:35:46.654013 master-0 kubenswrapper[7271]: I0313 10:35:46.653865 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 10:35:46.654327 master-0 
kubenswrapper[7271]: I0313 10:35:46.654205 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:46.654648 master-0 kubenswrapper[7271]: I0313 10:35:46.654625 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 10:35:46.665323 master-0 kubenswrapper[7271]: I0313 10:35:46.665280 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 10:35:46.685786 master-0 kubenswrapper[7271]: I0313 10:35:46.685695 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 10:35:46.700620 master-0 kubenswrapper[7271]: I0313 10:35:46.700562 7271 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 13 10:35:46.705451 master-0 kubenswrapper[7271]: I0313 10:35:46.705421 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 10:35:46.724231 master-0 kubenswrapper[7271]: I0313 10:35:46.724184 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.724329 master-0 kubenswrapper[7271]: I0313 10:35:46.724250 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59zw\" (UniqueName: 
\"kubernetes.io/projected/95339220-324d-45e7-bdc2-e4f42fbd1d32-kube-api-access-j59zw\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:35:46.724329 master-0 kubenswrapper[7271]: I0313 10:35:46.724286 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-socket-dir-parent\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.724329 master-0 kubenswrapper[7271]: I0313 10:35:46.724313 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-netns\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.724458 master-0 kubenswrapper[7271]: I0313 10:35:46.724413 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-os-release\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.724495 master-0 kubenswrapper[7271]: I0313 10:35:46.724470 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-var-lib-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.724538 master-0 kubenswrapper[7271]: I0313 10:35:46.724523 
7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/b12e76f4-b960-4534-90e6-a2cdbecd1728-kube-api-access-xq9dl\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd" Mar 13 10:35:46.724577 master-0 kubenswrapper[7271]: I0313 10:35:46.724550 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:46.724787 master-0 kubenswrapper[7271]: I0313 10:35:46.724755 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-hostroot\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.724847 master-0 kubenswrapper[7271]: I0313 10:35:46.724815 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-system-cni-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.724878 master-0 kubenswrapper[7271]: I0313 10:35:46.724858 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " 
pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:46.724908 master-0 kubenswrapper[7271]: I0313 10:35:46.724882 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-cni-binary-copy\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.724935 master-0 kubenswrapper[7271]: I0313 10:35:46.724908 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp6pp\" (UniqueName: \"kubernetes.io/projected/8a305f45-8689-45a8-8c8b-5954f2c863df-kube-api-access-zp6pp\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:46.724967 master-0 kubenswrapper[7271]: I0313 10:35:46.724949 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:35:46.725002 master-0 kubenswrapper[7271]: I0313 10:35:46.724984 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knkb7\" (UniqueName: \"kubernetes.io/projected/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-kube-api-access-knkb7\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:46.725084 master-0 kubenswrapper[7271]: I0313 10:35:46.725058 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/b12e76f4-b960-4534-90e6-a2cdbecd1728-host-slash\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd" Mar 13 10:35:46.725180 master-0 kubenswrapper[7271]: I0313 10:35:46.725161 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-kubelet\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.725228 master-0 kubenswrapper[7271]: I0313 10:35:46.725200 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:46.725263 master-0 kubenswrapper[7271]: I0313 10:35:46.725253 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.725292 master-0 kubenswrapper[7271]: I0313 10:35:46.725271 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovn-node-metrics-cert\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.725349 
master-0 kubenswrapper[7271]: I0313 10:35:46.725316 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-multus\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.725382 master-0 kubenswrapper[7271]: I0313 10:35:46.725356 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvtr\" (UniqueName: \"kubernetes.io/projected/9aa4b44d-f202-4670-afab-44b38960026f-kube-api-access-bjvtr\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.725423 master-0 kubenswrapper[7271]: I0313 10:35:46.725379 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:46.725423 master-0 kubenswrapper[7271]: I0313 10:35:46.725398 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-system-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.725423 master-0 kubenswrapper[7271]: I0313 10:35:46.725405 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-cni-binary-copy\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " 
pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.725504 master-0 kubenswrapper[7271]: I0313 10:35:46.725412 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-k8s-cni-cncf-io\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.725504 master-0 kubenswrapper[7271]: I0313 10:35:46.725491 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-ovnkube-identity-cm\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:35:46.725557 master-0 kubenswrapper[7271]: I0313 10:35:46.725520 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-systemd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.725557 master-0 kubenswrapper[7271]: E0313 10:35:46.725534 7271 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:35:46.725557 master-0 kubenswrapper[7271]: I0313 10:35:46.725551 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:46.725658 master-0 
kubenswrapper[7271]: E0313 10:35:46.725628 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.225605369 +0000 UTC m=+1.752427759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found Mar 13 10:35:46.725658 master-0 kubenswrapper[7271]: I0313 10:35:46.725647 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-netd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.725743 master-0 kubenswrapper[7271]: E0313 10:35:46.725666 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 10:35:46.725743 master-0 kubenswrapper[7271]: E0313 10:35:46.725729 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.225707422 +0000 UTC m=+1.752529812 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found Mar 13 10:35:46.725814 master-0 kubenswrapper[7271]: E0313 10:35:46.725785 7271 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:46.725845 master-0 kubenswrapper[7271]: E0313 10:35:46.725826 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.225815505 +0000 UTC m=+1.752638115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:46.726202 master-0 kubenswrapper[7271]: I0313 10:35:46.726162 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-ovnkube-identity-cm\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:35:46.726297 master-0 kubenswrapper[7271]: I0313 10:35:46.725670 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56qz6\" (UniqueName: \"kubernetes.io/projected/79bb87a4-8834-4c73-834e-356ccc1f7f9b-kube-api-access-56qz6\") pod 
\"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:46.726398 master-0 kubenswrapper[7271]: I0313 10:35:46.726372 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-multus-certs\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.726463 master-0 kubenswrapper[7271]: I0313 10:35:46.726396 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:35:46.726695 master-0 kubenswrapper[7271]: I0313 10:35:46.726646 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.726883 master-0 kubenswrapper[7271]: I0313 10:35:46.726692 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovn-node-metrics-cert\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.727079 master-0 kubenswrapper[7271]: I0313 10:35:46.727047 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:35:46.727138 master-0 kubenswrapper[7271]: I0313 10:35:46.727089 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.727138 master-0 kubenswrapper[7271]: I0313 10:35:46.727114 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:46.727138 master-0 kubenswrapper[7271]: I0313 10:35:46.727134 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-host-etc-kube\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:35:46.727230 master-0 kubenswrapper[7271]: I0313 10:35:46.727153 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-cnibin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.727230 master-0 kubenswrapper[7271]: I0313 10:35:46.727172 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-systemd-units\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.727230 master-0 kubenswrapper[7271]: I0313 10:35:46.727202 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk4sg\" (UniqueName: \"kubernetes.io/projected/f87662b9-6ac6-44f3-8a16-ff858c2baa91-kube-api-access-zk4sg\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk"
Mar 13 10:35:46.727325 master-0 kubenswrapper[7271]: I0313 10:35:46.727237 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxvqn\" (UniqueName: \"kubernetes.io/projected/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-kube-api-access-vxvqn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.727325 master-0 kubenswrapper[7271]: I0313 10:35:46.727265 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:46.727325 master-0 kubenswrapper[7271]: I0313 10:35:46.727283 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:35:46.727325 master-0 kubenswrapper[7271]: I0313 10:35:46.727298 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-netns\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.727465 master-0 kubenswrapper[7271]: I0313 10:35:46.727335 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:35:46.727465 master-0 kubenswrapper[7271]: I0313 10:35:46.727364 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-binary-copy\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:35:46.727465 master-0 kubenswrapper[7271]: I0313 10:35:46.727385 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-env-overrides\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk"
Mar 13 10:35:46.727465 master-0 kubenswrapper[7271]: I0313 10:35:46.727404 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-etc-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.727465 master-0 kubenswrapper[7271]: I0313 10:35:46.727444 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:46.727692 master-0 kubenswrapper[7271]: I0313 10:35:46.727488 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf36b4-e556-4723-a624-aa2edc69c301-service-ca\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:35:46.727800 master-0 kubenswrapper[7271]: I0313 10:35:46.727721 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf36b4-e556-4723-a624-aa2edc69c301-service-ca\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:35:46.727800 master-0 kubenswrapper[7271]: I0313 10:35:46.727739 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-host-etc-kube\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:35:46.727800 master-0 kubenswrapper[7271]: I0313 10:35:46.727766 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aaf36b4-e556-4723-a624-aa2edc69c301-kube-api-access\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:35:46.727800 master-0 kubenswrapper[7271]: I0313 10:35:46.727795 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-bin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.727991 master-0 kubenswrapper[7271]: I0313 10:35:46.727819 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-multus-daemon-config\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.727991 master-0 kubenswrapper[7271]: I0313 10:35:46.727846 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6nnz\" (UniqueName: \"kubernetes.io/projected/5843b0d4-a538-4261-b425-598e318c9d07-kube-api-access-r6nnz\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:35:46.727991 master-0 kubenswrapper[7271]: E0313 10:35:46.727404 7271 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 10:35:46.727991 master-0 kubenswrapper[7271]: I0313 10:35:46.727908 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.727991 master-0 kubenswrapper[7271]: E0313 10:35:46.727925 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.227913351 +0000 UTC m=+1.754735961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found
Mar 13 10:35:46.727991 master-0 kubenswrapper[7271]: I0313 10:35:46.727963 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:46.727991 master-0 kubenswrapper[7271]: E0313 10:35:46.727990 7271 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:46.728211 master-0 kubenswrapper[7271]: E0313 10:35:46.728023 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.228014174 +0000 UTC m=+1.754836564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found
Mar 13 10:35:46.728211 master-0 kubenswrapper[7271]: I0313 10:35:46.727993 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:35:46.728211 master-0 kubenswrapper[7271]: I0313 10:35:46.728052 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg69z\" (UniqueName: \"kubernetes.io/projected/1c12a5d5-711f-4663-974c-c4b06e15fc39-kube-api-access-cg69z\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:35:46.728211 master-0 kubenswrapper[7271]: I0313 10:35:46.728074 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-os-release\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.728211 master-0 kubenswrapper[7271]: I0313 10:35:46.728151 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:46.728211 master-0 kubenswrapper[7271]: I0313 10:35:46.728155 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:35:46.728211 master-0 kubenswrapper[7271]: I0313 10:35:46.728208 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b12e76f4-b960-4534-90e6-a2cdbecd1728-iptables-alerter-script\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:46.728465 master-0 kubenswrapper[7271]: I0313 10:35:46.728258 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-cnibin\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:35:46.728465 master-0 kubenswrapper[7271]: I0313 10:35:46.728301 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-etc-kubernetes\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.728465 master-0 kubenswrapper[7271]: I0313 10:35:46.728348 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-node-log\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.728465 master-0 kubenswrapper[7271]: I0313 10:35:46.728394 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:35:46.728465 master-0 kubenswrapper[7271]: I0313 10:35:46.728426 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-slash\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.728465 master-0 kubenswrapper[7271]: I0313 10:35:46.728464 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-env-overrides\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.728675 master-0 kubenswrapper[7271]: I0313 10:35:46.728516 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:46.728675 master-0 kubenswrapper[7271]: I0313 10:35:46.728559 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:35:46.728675 master-0 kubenswrapper[7271]: E0313 10:35:46.728643 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:46.728759 master-0 kubenswrapper[7271]: E0313 10:35:46.728681 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.228669451 +0000 UTC m=+1.755491961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:46.728759 master-0 kubenswrapper[7271]: I0313 10:35:46.728687 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:46.728817 master-0 kubenswrapper[7271]: I0313 10:35:46.728741 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-multus-daemon-config\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.728817 master-0 kubenswrapper[7271]: I0313 10:35:46.728777 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-env-overrides\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.728817 master-0 kubenswrapper[7271]: I0313 10:35:46.728756 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-config\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.728906 master-0 kubenswrapper[7271]: I0313 10:35:46.728834 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:46.728906 master-0 kubenswrapper[7271]: I0313 10:35:46.728886 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk"
Mar 13 10:35:46.728959 master-0 kubenswrapper[7271]: I0313 10:35:46.728913 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-ovn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.728959 master-0 kubenswrapper[7271]: I0313 10:35:46.728918 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-env-overrides\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk"
Mar 13 10:35:46.728959 master-0 kubenswrapper[7271]: I0313 10:35:46.728941 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-log-socket\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.729053 master-0 kubenswrapper[7271]: I0313 10:35:46.728970 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:46.729053 master-0 kubenswrapper[7271]: I0313 10:35:46.728978 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.729112 master-0 kubenswrapper[7271]: I0313 10:35:46.729052 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-config\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.729112 master-0 kubenswrapper[7271]: E0313 10:35:46.729085 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 10:35:46.729112 master-0 kubenswrapper[7271]: E0313 10:35:46.729081 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 10:35:46.729112 master-0 kubenswrapper[7271]: I0313 10:35:46.729101 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-binary-copy\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:35:46.729220 master-0 kubenswrapper[7271]: I0313 10:35:46.729148 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-conf-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.729220 master-0 kubenswrapper[7271]: E0313 10:35:46.729173 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.229159395 +0000 UTC m=+1.755982045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found
Mar 13 10:35:46.729220 master-0 kubenswrapper[7271]: I0313 10:35:46.729212 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-bin\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.729306 master-0 kubenswrapper[7271]: I0313 10:35:46.729231 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b12e76f4-b960-4534-90e6-a2cdbecd1728-iptables-alerter-script\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:35:46.729306 master-0 kubenswrapper[7271]: E0313 10:35:46.729269 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.229257537 +0000 UTC m=+1.756080137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found
Mar 13 10:35:46.729413 master-0 kubenswrapper[7271]: I0313 10:35:46.729383 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk"
Mar 13 10:35:46.729645 master-0 kubenswrapper[7271]: I0313 10:35:46.729567 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:35:46.729709 master-0 kubenswrapper[7271]: I0313 10:35:46.729654 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:35:46.729709 master-0 kubenswrapper[7271]: I0313 10:35:46.729704 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-kubelet\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.729779 master-0 kubenswrapper[7271]: I0313 10:35:46.729741 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:35:46.729848 master-0 kubenswrapper[7271]: I0313 10:35:46.729826 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-script-lib\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.730027 master-0 kubenswrapper[7271]: I0313 10:35:46.729965 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:35:46.730132 master-0 kubenswrapper[7271]: I0313 10:35:46.730107 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-script-lib\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.739656 master-0 kubenswrapper[7271]: I0313 10:35:46.739471 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec3168fc-6c8f-4603-94e0-17b1ae22a802-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:35:46.761006 master-0 kubenswrapper[7271]: I0313 10:35:46.760970 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grplv\" (UniqueName: \"kubernetes.io/projected/574bf255-14b3-40af-b240-2d3abd5b86b8-kube-api-access-grplv\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:35:46.779338 master-0 kubenswrapper[7271]: I0313 10:35:46.779286 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vcv\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-kube-api-access-m5vcv\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:46.796527 master-0 kubenswrapper[7271]: I0313 10:35:46.796447 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:35:46.818114 master-0 kubenswrapper[7271]: I0313 10:35:46.818044 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.830869 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-multus\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.830923 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-systemd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.830945 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.830963 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-system-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.830980 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-k8s-cni-cncf-io\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831028 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-netd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831074 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-multus-certs\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831093 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831124 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831141 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-cnibin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831157 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-systemd-units\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831193 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831213 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-netns\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831236 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831256 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831280 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-bin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831301 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-etc-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831326 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831344 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-os-release\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:35:46.840562
master-0 kubenswrapper[7271]: I0313 10:35:46.831379 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-cnibin\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831406 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-etc-kubernetes\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831423 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-node-log\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831440 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831459 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-slash\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 
kubenswrapper[7271]: I0313 10:35:46.831482 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831498 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831514 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-ovn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831529 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-log-socket\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831567 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 
10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831621 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-bin\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831643 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-conf-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831693 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-kubelet\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831715 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-socket-dir-parent\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831730 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-netns\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831747 
7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-os-release\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831766 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-var-lib-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831783 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831806 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-hostroot\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831856 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-system-cni-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 
10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831882 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831923 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b12e76f4-b960-4534-90e6-a2cdbecd1728-host-slash\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.831945 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-kubelet\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832019 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-kubelet\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832059 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-multus\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 
10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832080 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-systemd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832100 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832143 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-system-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832166 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-k8s-cni-cncf-io\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832187 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-netd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 
kubenswrapper[7271]: I0313 10:35:46.832212 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-multus-certs\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832244 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832266 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832296 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-cnibin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832317 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-systemd-units\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.832394 7271 
secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.832436 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.332422036 +0000 UTC m=+1.859244426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : secret "metrics-daemon-secret" not found Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832632 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-netns\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.832677 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.832701 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.332693283 +0000 UTC m=+1.859515673 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832749 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832782 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-log-socket\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832808 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-conf-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832839 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-kubelet\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832883 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-bin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.832927 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-netns\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833001 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-etc-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833041 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-slash\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833059 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-os-release\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833070 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833133 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b12e76f4-b960-4534-90e6-a2cdbecd1728-host-slash\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833135 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833164 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-bin\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833262 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-os-release\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833286 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-cnibin\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833308 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-hostroot\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833411 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-node-log\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.833420 7271 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833450 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-ovn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833488 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-socket-dir-parent\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: 
I0313 10:35:46.833500 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-system-cni-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.833570 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.333526425 +0000 UTC m=+1.860348815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833619 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.833701 7271 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.833729 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. 
No retries permitted until 2026-03-13 10:35:47.33372108 +0000 UTC m=+1.860543470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833762 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-etc-kubernetes\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.833885 7271 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: I0313 10:35:46.833908 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-var-lib-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:46.840562 master-0 kubenswrapper[7271]: E0313 10:35:46.833967 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:47.333934666 +0000 UTC m=+1.860757286 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found Mar 13 10:35:46.843745 master-0 kubenswrapper[7271]: I0313 10:35:46.841296 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p29zg\" (UniqueName: \"kubernetes.io/projected/a1a998af-4fc0-4078-a6a0-93dde6c00508-kube-api-access-p29zg\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" Mar 13 10:35:46.862121 master-0 kubenswrapper[7271]: I0313 10:35:46.862054 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp2qn\" (UniqueName: \"kubernetes.io/projected/37b2e803-302b-4650-b18f-d3d2dd703bd5-kube-api-access-hp2qn\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" Mar 13 10:35:46.875962 master-0 kubenswrapper[7271]: I0313 10:35:46.875909 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfpp\" (UniqueName: \"kubernetes.io/projected/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-kube-api-access-8rfpp\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" Mar 13 10:35:46.899752 master-0 kubenswrapper[7271]: I0313 10:35:46.899693 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-bound-sa-token\") pod \"ingress-operator-677db989d6-tzd9b\" 
(UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:46.901632 master-0 kubenswrapper[7271]: I0313 10:35:46.901130 7271 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:35:46.917926 master-0 kubenswrapper[7271]: I0313 10:35:46.917850 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrlx\" (UniqueName: \"kubernetes.io/projected/866cf034-8fd8-4f16-8e9b-68627228aa8d-kube-api-access-mnrlx\") pod \"csi-snapshot-controller-operator-5685fbc7d-mfvmx\" (UID: \"866cf034-8fd8-4f16-8e9b-68627228aa8d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" Mar 13 10:35:46.939989 master-0 kubenswrapper[7271]: I0313 10:35:46.939911 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqjkf\" (UniqueName: \"kubernetes.io/projected/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-kube-api-access-qqjkf\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:35:46.980197 master-0 kubenswrapper[7271]: I0313 10:35:46.980130 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84xk\" (UniqueName: \"kubernetes.io/projected/2afe3890-e844-4dd3-ba49-3ac9178549bf-kube-api-access-d84xk\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:35:46.996078 master-0 kubenswrapper[7271]: I0313 10:35:46.996035 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5rht\" (UniqueName: \"kubernetes.io/projected/b8d40b37-0f3d-4531-9fa8-eda965d2337d-kube-api-access-l5rht\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: 
\"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:35:47.023899 master-0 kubenswrapper[7271]: I0313 10:35:47.023833 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cpdn\" (UniqueName: \"kubernetes.io/projected/c455a959-d764-4b4f-a1e0-95c27495dd9d-kube-api-access-2cpdn\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:35:47.041996 master-0 kubenswrapper[7271]: I0313 10:35:47.041905 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd2mn\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-kube-api-access-qd2mn\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:47.057183 master-0 kubenswrapper[7271]: I0313 10:35:47.057103 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpdlr\" (UniqueName: \"kubernetes.io/projected/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-kube-api-access-lpdlr\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:35:47.078971 master-0 kubenswrapper[7271]: I0313 10:35:47.078892 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6xlb\" (UniqueName: \"kubernetes.io/projected/4d5479f3-51ec-4b93-8188-21cdda44828d-kube-api-access-j6xlb\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:47.099046 master-0 kubenswrapper[7271]: I0313 
10:35:47.098842 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/3ff2ab1c-7057-4e18-8e32-68807f86532a-kube-api-access-8c4rc\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:35:47.119065 master-0 kubenswrapper[7271]: I0313 10:35:47.119008 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22bwx\" (UniqueName: \"kubernetes.io/projected/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-kube-api-access-22bwx\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:35:47.135393 master-0 kubenswrapper[7271]: I0313 10:35:47.135352 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcjm\" (UniqueName: \"kubernetes.io/projected/42b4d53c-af72-44c8-9605-271445f95f87-kube-api-access-kjcjm\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:47.160531 master-0 kubenswrapper[7271]: I0313 10:35:47.156785 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9db15a-8854-485b-9863-9cbe5dddd977-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:35:47.177191 master-0 kubenswrapper[7271]: I0313 10:35:47.177148 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjk5l\" (UniqueName: 
\"kubernetes.io/projected/6ed47c57-533f-43e4-88eb-07da29b4878f-kube-api-access-rjk5l\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:35:47.185676 master-0 kubenswrapper[7271]: E0313 10:35:47.185618 7271 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9" Mar 13 10:35:47.185916 master-0 kubenswrapper[7271]: E0313 10:35:47.185850 7271 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9,Command:[cluster-kube-storage-version-migrator-operator start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p29zg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-storage-version-migrator-operator-7f65c457f5-j7lxv_openshift-kube-storage-version-migrator-operator(a1a998af-4fc0-4078-a6a0-93dde6c00508): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:35:47.187261 master-0 kubenswrapper[7271]: E0313 10:35:47.187191 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" podUID="a1a998af-4fc0-4078-a6a0-93dde6c00508" Mar 13 10:35:47.197461 master-0 kubenswrapper[7271]: I0313 10:35:47.197414 7271 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59zw\" (UniqueName: \"kubernetes.io/projected/95339220-324d-45e7-bdc2-e4f42fbd1d32-kube-api-access-j59zw\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:35:47.216522 master-0 kubenswrapper[7271]: I0313 10:35:47.216463 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/b12e76f4-b960-4534-90e6-a2cdbecd1728-kube-api-access-xq9dl\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd" Mar 13 10:35:47.241170 master-0 kubenswrapper[7271]: I0313 10:35:47.241114 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:47.241340 master-0 kubenswrapper[7271]: I0313 10:35:47.241184 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:47.241388 master-0 kubenswrapper[7271]: E0313 10:35:47.241337 7271 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:35:47.241496 master-0 kubenswrapper[7271]: E0313 10:35:47.241470 7271 secret.go:189] Couldn't get secret 
openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:47.241535 master-0 kubenswrapper[7271]: I0313 10:35:47.241358 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:47.241535 master-0 kubenswrapper[7271]: E0313 10:35:47.241470 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 10:35:47.241632 master-0 kubenswrapper[7271]: E0313 10:35:47.241478 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.241455092 +0000 UTC m=+2.768277482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found Mar 13 10:35:47.241632 master-0 kubenswrapper[7271]: E0313 10:35:47.241601 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.241570485 +0000 UTC m=+2.768392875 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:47.241699 master-0 kubenswrapper[7271]: I0313 10:35:47.241634 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:47.241774 master-0 kubenswrapper[7271]: E0313 10:35:47.241742 7271 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 10:35:47.241824 master-0 kubenswrapper[7271]: E0313 10:35:47.241769 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.241706629 +0000 UTC m=+2.768529019 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found Mar 13 10:35:47.241885 master-0 kubenswrapper[7271]: I0313 10:35:47.241851 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:35:47.241922 master-0 kubenswrapper[7271]: E0313 10:35:47.241885 7271 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:35:47.241922 master-0 kubenswrapper[7271]: E0313 10:35:47.241911 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.241902314 +0000 UTC m=+2.768724704 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found Mar 13 10:35:47.241976 master-0 kubenswrapper[7271]: I0313 10:35:47.241940 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:35:47.242017 master-0 kubenswrapper[7271]: I0313 10:35:47.241980 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:47.242017 master-0 kubenswrapper[7271]: E0313 10:35:47.241985 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.241976456 +0000 UTC m=+2.768798836 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found Mar 13 10:35:47.242113 master-0 kubenswrapper[7271]: E0313 10:35:47.242086 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 10:35:47.242149 master-0 kubenswrapper[7271]: E0313 10:35:47.242141 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.24212935 +0000 UTC m=+2.768951740 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found Mar 13 10:35:47.242339 master-0 kubenswrapper[7271]: I0313 10:35:47.242189 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:35:47.242339 master-0 kubenswrapper[7271]: E0313 10:35:47.242259 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 10:35:47.242339 master-0 kubenswrapper[7271]: E0313 10:35:47.242305 7271 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.242288975 +0000 UTC m=+2.769111355 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found Mar 13 10:35:47.242339 master-0 kubenswrapper[7271]: E0313 10:35:47.242331 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 10:35:47.242462 master-0 kubenswrapper[7271]: E0313 10:35:47.242363 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.242354466 +0000 UTC m=+2.769176856 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found Mar 13 10:35:47.244518 master-0 kubenswrapper[7271]: I0313 10:35:47.244464 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knkb7\" (UniqueName: \"kubernetes.io/projected/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-kube-api-access-knkb7\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:47.257079 master-0 kubenswrapper[7271]: I0313 10:35:47.257038 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56qz6\" (UniqueName: \"kubernetes.io/projected/79bb87a4-8834-4c73-834e-356ccc1f7f9b-kube-api-access-56qz6\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:47.276604 master-0 kubenswrapper[7271]: I0313 10:35:47.276535 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvtr\" (UniqueName: \"kubernetes.io/projected/9aa4b44d-f202-4670-afab-44b38960026f-kube-api-access-bjvtr\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:35:47.297614 master-0 kubenswrapper[7271]: I0313 10:35:47.297542 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp6pp\" (UniqueName: \"kubernetes.io/projected/8a305f45-8689-45a8-8c8b-5954f2c863df-kube-api-access-zp6pp\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:47.315828 master-0 
kubenswrapper[7271]: I0313 10:35:47.315789 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aaf36b4-e556-4723-a624-aa2edc69c301-kube-api-access\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:47.337920 master-0 kubenswrapper[7271]: I0313 10:35:47.337887 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk4sg\" (UniqueName: \"kubernetes.io/projected/f87662b9-6ac6-44f3-8a16-ff858c2baa91-kube-api-access-zk4sg\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:35:47.343631 master-0 kubenswrapper[7271]: I0313 10:35:47.343546 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:47.343728 master-0 kubenswrapper[7271]: I0313 10:35:47.343645 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:47.343728 master-0 kubenswrapper[7271]: I0313 10:35:47.343669 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") 
pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:47.344117 master-0 kubenswrapper[7271]: E0313 10:35:47.343903 7271 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 10:35:47.344117 master-0 kubenswrapper[7271]: I0313 10:35:47.343966 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:47.344117 master-0 kubenswrapper[7271]: E0313 10:35:47.344038 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.344008624 +0000 UTC m=+2.870831224 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found Mar 13 10:35:47.344117 master-0 kubenswrapper[7271]: E0313 10:35:47.344051 7271 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 10:35:47.344117 master-0 kubenswrapper[7271]: E0313 10:35:47.344088 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 10:35:47.344292 master-0 kubenswrapper[7271]: E0313 10:35:47.344139 7271 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 10:35:47.344292 master-0 kubenswrapper[7271]: E0313 10:35:47.344095 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.344077716 +0000 UTC m=+2.870900306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found Mar 13 10:35:47.344292 master-0 kubenswrapper[7271]: E0313 10:35:47.344197 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. 
No retries permitted until 2026-03-13 10:35:48.344176919 +0000 UTC m=+2.870999509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found Mar 13 10:35:47.344292 master-0 kubenswrapper[7271]: I0313 10:35:47.344244 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:35:47.344417 master-0 kubenswrapper[7271]: E0313 10:35:47.344321 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.344304632 +0000 UTC m=+2.871127022 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : secret "metrics-daemon-secret" not found
Mar 13 10:35:47.344417 master-0 kubenswrapper[7271]: E0313 10:35:47.344397 7271 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 10:35:47.344475 master-0 kubenswrapper[7271]: E0313 10:35:47.344446 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:48.344432956 +0000 UTC m=+2.871255346 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found
Mar 13 10:35:47.362198 master-0 kubenswrapper[7271]: I0313 10:35:47.362049 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxvqn\" (UniqueName: \"kubernetes.io/projected/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-kube-api-access-vxvqn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:47.380662 master-0 kubenswrapper[7271]: I0313 10:35:47.380608 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg69z\" (UniqueName: \"kubernetes.io/projected/1c12a5d5-711f-4663-974c-c4b06e15fc39-kube-api-access-cg69z\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:35:47.396637 master-0 kubenswrapper[7271]: I0313 10:35:47.396577 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6nnz\" (UniqueName: \"kubernetes.io/projected/5843b0d4-a538-4261-b425-598e318c9d07-kube-api-access-r6nnz\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:35:47.411425 master-0 kubenswrapper[7271]: E0313 10:35:47.411169 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 10:35:47.433924 master-0 kubenswrapper[7271]: W0313 10:35:47.433859 7271 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 13 10:35:47.434040 master-0 kubenswrapper[7271]: E0313 10:35:47.433989 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 10:35:47.452115 master-0 kubenswrapper[7271]: E0313 10:35:47.451764 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:35:47.468514 master-0 kubenswrapper[7271]: I0313 10:35:47.468316 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:35:47.472050 master-0 kubenswrapper[7271]: E0313 10:35:47.472028 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 10:35:47.473186 master-0 kubenswrapper[7271]: I0313 10:35:47.473161 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:35:47.490807 master-0 kubenswrapper[7271]: E0313 10:35:47.490770 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 10:35:47.516873 master-0 kubenswrapper[7271]: I0313 10:35:47.516531 7271 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 13 10:35:47.522357 master-0 kubenswrapper[7271]: I0313 10:35:47.522310 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:35:47.810630 master-0 kubenswrapper[7271]: I0313 10:35:47.810026 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:35:48.114436 master-0 kubenswrapper[7271]: E0313 10:35:48.114294 7271 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460"
Mar 13 10:35:48.114676 master-0 kubenswrapper[7271]: E0313 10:35:48.114488 7271 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xq9dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-gdjjd_openshift-network-operator(b12e76f4-b960-4534-90e6-a2cdbecd1728): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:35:48.115795 master-0 kubenswrapper[7271]: E0313 10:35:48.115737 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-gdjjd" podUID="b12e76f4-b960-4534-90e6-a2cdbecd1728"
Mar 13 10:35:48.253714 master-0 kubenswrapper[7271]: I0313 10:35:48.253534 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:35:48.253714 master-0 kubenswrapper[7271]: I0313 10:35:48.253597 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:35:48.253714 master-0 kubenswrapper[7271]: I0313 10:35:48.253624 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:48.253714 master-0 kubenswrapper[7271]: I0313 10:35:48.253646 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:35:48.253714 master-0 kubenswrapper[7271]: I0313 10:35:48.253693 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:48.253714 master-0 kubenswrapper[7271]: I0313 10:35:48.253717 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: I0313 10:35:48.253868 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: I0313 10:35:48.253918 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: E0313 10:35:48.253959 7271 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: E0313 10:35:48.254013 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: E0313 10:35:48.254062 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.254047636 +0000 UTC m=+4.780870016 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: E0313 10:35:48.254064 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: E0313 10:35:48.254077 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.254069966 +0000 UTC m=+4.780892356 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: E0313 10:35:48.254109 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.254096947 +0000 UTC m=+4.780919337 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: E0313 10:35:48.254115 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: E0313 10:35:48.254133 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.254127188 +0000 UTC m=+4.780949578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 10:35:48.254127 master-0 kubenswrapper[7271]: E0313 10:35:48.253984 7271 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:48.254432 master-0 kubenswrapper[7271]: E0313 10:35:48.254155 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.254149679 +0000 UTC m=+4.780972069 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found
Mar 13 10:35:48.254432 master-0 kubenswrapper[7271]: E0313 10:35:48.254192 7271 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:48.254432 master-0 kubenswrapper[7271]: E0313 10:35:48.254209 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.25420363 +0000 UTC m=+4.781026020 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found
Mar 13 10:35:48.254432 master-0 kubenswrapper[7271]: E0313 10:35:48.254234 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 10:35:48.254432 master-0 kubenswrapper[7271]: E0313 10:35:48.254253 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.254247671 +0000 UTC m=+4.781070061 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found
Mar 13 10:35:48.254432 master-0 kubenswrapper[7271]: E0313 10:35:48.254298 7271 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:48.254432 master-0 kubenswrapper[7271]: E0313 10:35:48.254319 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.254311933 +0000 UTC m=+4.781134323 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found
Mar 13 10:35:48.355267 master-0 kubenswrapper[7271]: I0313 10:35:48.355093 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:35:48.355746 master-0 kubenswrapper[7271]: E0313 10:35:48.355339 7271 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 10:35:48.355746 master-0 kubenswrapper[7271]: E0313 10:35:48.355456 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.355431417 +0000 UTC m=+4.882253807 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found
Mar 13 10:35:48.355886 master-0 kubenswrapper[7271]: I0313 10:35:48.355786 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:35:48.355886 master-0 kubenswrapper[7271]: I0313 10:35:48.355834 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:35:48.355979 master-0 kubenswrapper[7271]: I0313 10:35:48.355895 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:35:48.355979 master-0 kubenswrapper[7271]: I0313 10:35:48.355928 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:48.355979 master-0 kubenswrapper[7271]: E0313 10:35:48.355938 7271 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 13 10:35:48.355979 master-0 kubenswrapper[7271]: E0313 10:35:48.355972 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.355962891 +0000 UTC m=+4.882785281 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : secret "metrics-daemon-secret" not found
Mar 13 10:35:48.356167 master-0 kubenswrapper[7271]: E0313 10:35:48.356105 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 10:35:48.357311 master-0 kubenswrapper[7271]: E0313 10:35:48.356171 7271 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 10:35:48.357311 master-0 kubenswrapper[7271]: E0313 10:35:48.356220 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.356194517 +0000 UTC m=+4.883016907 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found
Mar 13 10:35:48.357311 master-0 kubenswrapper[7271]: E0313 10:35:48.357226 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.357213384 +0000 UTC m=+4.884035974 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found
Mar 13 10:35:48.357311 master-0 kubenswrapper[7271]: E0313 10:35:48.356288 7271 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 10:35:48.357311 master-0 kubenswrapper[7271]: E0313 10:35:48.357274 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:50.357262696 +0000 UTC m=+4.884085286 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found
Mar 13 10:35:48.540852 master-0 kubenswrapper[7271]: E0313 10:35:48.540639 7271 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3680315987/1\": happened during read: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba"
Mar 13 10:35:48.541116 master-0 kubenswrapper[7271]: E0313 10:35:48.540880 7271 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hp2qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-69b6fc6b88-lntzv_openshift-service-ca-operator(37b2e803-302b-4650-b18f-d3d2dd703bd5): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3680315987/1\": happened during read: context canceled" logger="UnhandledError"
Mar 13 10:35:48.541499 master-0 kubenswrapper[7271]: E0313 10:35:48.541453 7271 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783"
Mar 13 10:35:48.541630 master-0 kubenswrapper[7271]: E0313 10:35:48.541551 7271 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:copy-catalogd-manifests,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783,Command:[/bin/sh],Args:[-c cp -a /openshift/manifests /operand-assets/catalogd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:operand-assets,ReadOnly:false,MountPath:/operand-assets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5rht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000360000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-olm-operator-77899cf6d-kh9h2_openshift-cluster-olm-operator(b8d40b37-0f3d-4531-9fa8-eda965d2337d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:35:48.542793 master-0 kubenswrapper[7271]: E0313 10:35:48.542744 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" podUID="b8d40b37-0f3d-4531-9fa8-eda965d2337d"
Mar 13 10:35:48.542850 master-0 kubenswrapper[7271]: E0313 10:35:48.542817 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage3680315987/1\\\": happened during read: context canceled\"" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" podUID="37b2e803-302b-4650-b18f-d3d2dd703bd5"
Mar 13 10:35:48.741118 master-0 kubenswrapper[7271]: I0313 10:35:48.741058 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:35:49.064848 master-0 kubenswrapper[7271]: E0313 10:35:49.064774 7271 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56"
Mar 13 10:35:49.065496 master-0 kubenswrapper[7271]: E0313 10:35:49.065005 7271 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56,Command:[cluster-kube-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35768a0c3eb24134dd38633e8acfc7db69ee96b2fd660e9bba3b8c996452fef7,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-86d7cdfdfb-px9bl_openshift-kube-controller-manager-operator(ec3168fc-6c8f-4603-94e0-17b1ae22a802): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:35:49.066216 master-0 kubenswrapper[7271]: E0313 10:35:49.066178 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" podUID="ec3168fc-6c8f-4603-94e0-17b1ae22a802"
Mar 13 10:35:49.142925 master-0 kubenswrapper[7271]: I0313 10:35:49.141800 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:35:49.146864 master-0 kubenswrapper[7271]: I0313 10:35:49.146820 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:35:49.338163 master-0 kubenswrapper[7271]: E0313 10:35:49.337955 7271 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43"
Mar 13 10:35:49.338426 master-0 kubenswrapper[7271]: E0313 10:35:49.338185 7271 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:openshift-api,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43,Command:[write-available-featuresets --asset-output-dir=/available-featuregates --payload-version=$(OPERATOR_IMAGE_VERSION)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjk5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-64488f9d78-mvfgh_openshift-config-operator(6ed47c57-533f-43e4-88eb-07da29b4878f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:35:49.339616 master-0 kubenswrapper[7271]: E0313 10:35:49.339536 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-api\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f"
Mar 13 10:35:49.409319 master-0 kubenswrapper[7271]: I0313 10:35:49.408984 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:49.440601 master-0 kubenswrapper[7271]: I0313 10:35:49.440481 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:49.743998 master-0 kubenswrapper[7271]: I0313 10:35:49.743826 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:35:49.743998 master-0 kubenswrapper[7271]: I0313 10:35:49.743848 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:35:49.743998 master-0 kubenswrapper[7271]: I0313 10:35:49.743862 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:35:49.937532 master-0 kubenswrapper[7271]: E0313 10:35:49.937436 7271 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3" Mar 13 10:35:49.937764 master-0 kubenswrapper[7271]: E0313 10:35:49.937688 7271 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3,Command:[],Args:[start -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5e9989ee0577e930adcd97085176343a881bf92537dda1bf0325a3b1faf96d6,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnrlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000150000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-5685fbc7d-mfvmx_openshift-cluster-storage-operator(866cf034-8fd8-4f16-8e9b-68627228aa8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 10:35:49.940185 master-0 kubenswrapper[7271]: E0313 10:35:49.940107 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" podUID="866cf034-8fd8-4f16-8e9b-68627228aa8d" Mar 13 10:35:50.281806 master-0 kubenswrapper[7271]: I0313 10:35:50.281744 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: 
\"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: I0313 10:35:50.281821 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: I0313 10:35:50.281893 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: I0313 10:35:50.281920 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.281935 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: I0313 10:35:50.281957 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: 
\"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: I0313 10:35:50.281989 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282009 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.281992376 +0000 UTC m=+8.808814766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: I0313 10:35:50.282030 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282057 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282084 7271 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.282076088 +0000 UTC m=+8.808898478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: I0313 10:35:50.282055 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282095 7271 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282115 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.282108939 +0000 UTC m=+8.808931329 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282139 7271 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282158 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.28215243 +0000 UTC m=+8.808974820 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282190 7271 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282208 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.282200991 +0000 UTC m=+8.809023381 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282238 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282255 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.282249663 +0000 UTC m=+8.809072053 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282283 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282298 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.282293644 +0000 UTC m=+8.809116034 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282330 7271 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:35:50.282356 master-0 kubenswrapper[7271]: E0313 10:35:50.282347 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.282341065 +0000 UTC m=+8.809163455 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: I0313 10:35:50.383135 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: I0313 10:35:50.383219 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" 
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: I0313 10:35:50.383251 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: I0313 10:35:50.383458 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: I0313 10:35:50.383497 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l"
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383698 7271 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383758 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383811 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.383784818 +0000 UTC m=+8.910607348 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : secret "metrics-daemon-secret" not found
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383832 7271 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383837 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.383824469 +0000 UTC m=+8.910647049 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383867 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.383848379 +0000 UTC m=+8.910670769 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383714 7271 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383892 7271 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383902 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.383893881 +0000 UTC m=+8.910716271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found
Mar 13 10:35:50.386665 master-0 kubenswrapper[7271]: E0313 10:35:50.383933 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:54.383919421 +0000 UTC m=+8.910742011 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found
Mar 13 10:35:50.392459 master-0 kubenswrapper[7271]: E0313 10:35:50.392401 7271 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab"
Mar 13 10:35:50.392763 master-0 kubenswrapper[7271]: E0313 10:35:50.392635 7271 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab,Command:[cluster-openshift-apiserver-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-22bwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-799b6db4d7-sdg4w_openshift-apiserver-operator(5ed5e77b-948b-4d94-ac9f-440ee3c07e18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 10:35:50.394037 master-0 kubenswrapper[7271]: E0313 10:35:50.393960 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" podUID="5ed5e77b-948b-4d94-ac9f-440ee3c07e18"
Mar 13 10:35:50.639388 master-0 kubenswrapper[7271]: I0313 10:35:50.629782 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-96vwf"]
Mar 13 10:35:50.750301 master-0 kubenswrapper[7271]: I0313 10:35:50.750218 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" event={"ID":"1434c4a2-5c4d-478a-a16a-7d6a52ea3099","Type":"ContainerStarted","Data":"07efb32e685572e6b4d6844e3569402a8bdfbf11ae0829c85acd5de7788ca4d9"}
Mar 13 10:35:50.750301 master-0 kubenswrapper[7271]: I0313 10:35:50.750264 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:35:50.837807 master-0 kubenswrapper[7271]: W0313 10:35:50.832436 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod803de28e_3b31_4ea2_9b97_87a733635a5c.slice/crio-9a02e284386b73dcacdc66689703a6ce2a89d3ae22d94162ffdd3488c53d3335 WatchSource:0}: Error finding container 9a02e284386b73dcacdc66689703a6ce2a89d3ae22d94162ffdd3488c53d3335: Status 404 returned error can't find the container with id 9a02e284386b73dcacdc66689703a6ce2a89d3ae22d94162ffdd3488c53d3335
Mar 13 10:35:50.991676 master-0 kubenswrapper[7271]: I0313 10:35:50.991007 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:50.991676 master-0 kubenswrapper[7271]: I0313 10:35:50.991564 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:35:50.991676 master-0 kubenswrapper[7271]: I0313 10:35:50.991574 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:35:51.043653 master-0 kubenswrapper[7271]: I0313 10:35:51.043438 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:35:51.106778 master-0 kubenswrapper[7271]: I0313 10:35:51.106223 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 10:35:51.171916 master-0 kubenswrapper[7271]: I0313 10:35:51.171701 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 10:35:51.755770 master-0 kubenswrapper[7271]: I0313 10:35:51.755691 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" event={"ID":"574bf255-14b3-40af-b240-2d3abd5b86b8","Type":"ContainerStarted","Data":"a384e9c9352558c7493eb0f31fbfe7c7667c323e9cd28c07e6b3e552b94e372f"}
Mar 13 10:35:51.758149 master-0 kubenswrapper[7271]: I0313 10:35:51.758098 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" event={"ID":"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303","Type":"ContainerStarted","Data":"5c959a07b9cea59f8d22bac12b5ad0b337201cde45ef40482caaae6f05ee2a56"}
Mar 13 10:35:51.759694 master-0 kubenswrapper[7271]: I0313 10:35:51.759653 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-96vwf" event={"ID":"803de28e-3b31-4ea2-9b97-87a733635a5c","Type":"ContainerStarted","Data":"dd24da11996b28f0e77f0c690e60ade02609ebea2f47499fae52bd9a757e14a1"}
Mar 13 10:35:51.759694 master-0 kubenswrapper[7271]: I0313 10:35:51.759688 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-96vwf" event={"ID":"803de28e-3b31-4ea2-9b97-87a733635a5c","Type":"ContainerStarted","Data":"9a02e284386b73dcacdc66689703a6ce2a89d3ae22d94162ffdd3488c53d3335"}
Mar 13 10:35:51.761914 master-0 kubenswrapper[7271]: I0313 10:35:51.761858 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" event={"ID":"8f9db15a-8854-485b-9863-9cbe5dddd977","Type":"ContainerStarted","Data":"30ed7322c0091d1c760c898b8eeff7c2a46e380aac09f0741b2738a7131c9763"}
Mar 13 10:35:51.762323 master-0 kubenswrapper[7271]: I0313 10:35:51.762284 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:35:52.765429 master-0 kubenswrapper[7271]: I0313 10:35:52.765383 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:35:53.795545 master-0 kubenswrapper[7271]: I0313 10:35:53.795463 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:35:53.796196 master-0 kubenswrapper[7271]: I0313 10:35:53.795665 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:35:53.799153 master-0 kubenswrapper[7271]: I0313 10:35:53.799098 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:35:54.335302 master-0 kubenswrapper[7271]: I0313 10:35:54.335229 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:35:54.335302 master-0 kubenswrapper[7271]: I0313 10:35:54.335283 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:35:54.335722 master-0 kubenswrapper[7271]: E0313 10:35:54.335474 7271 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:35:54.335722 master-0 kubenswrapper[7271]: E0313 10:35:54.335636 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.335570765 +0000 UTC m=+16.862393155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found
Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: I0313 10:35:54.336095 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: I0313 10:35:54.336222 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: E0313 10:35:54.336134 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: I0313 10:35:54.336308 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\"
(UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: E0313 10:35:54.336335 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.336310565 +0000 UTC m=+16.863133155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: E0313 10:35:54.336181 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: I0313 10:35:54.336368 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: E0313 10:35:54.336382 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: E0313 10:35:54.336392 7271 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.336379137 +0000 UTC m=+16.863201527 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: I0313 10:35:54.336455 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:35:54.336489 master-0 kubenswrapper[7271]: I0313 10:35:54.336497 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:35:54.337166 master-0 kubenswrapper[7271]: E0313 10:35:54.336525 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 10:35:54.337166 master-0 kubenswrapper[7271]: E0313 10:35:54.336534 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert 
podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.336521211 +0000 UTC m=+16.863343601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found Mar 13 10:35:54.337166 master-0 kubenswrapper[7271]: E0313 10:35:54.336554 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.336546682 +0000 UTC m=+16.863369072 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found Mar 13 10:35:54.337166 master-0 kubenswrapper[7271]: E0313 10:35:54.336609 7271 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:54.337166 master-0 kubenswrapper[7271]: E0313 10:35:54.336611 7271 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 10:35:54.337166 master-0 kubenswrapper[7271]: E0313 10:35:54.336639 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.336629804 +0000 UTC m=+16.863452194 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found Mar 13 10:35:54.337166 master-0 kubenswrapper[7271]: E0313 10:35:54.336656 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.336645504 +0000 UTC m=+16.863467894 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found Mar 13 10:35:54.337166 master-0 kubenswrapper[7271]: E0313 10:35:54.336728 7271 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:35:54.337166 master-0 kubenswrapper[7271]: E0313 10:35:54.336907 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.3368596 +0000 UTC m=+16.863682130 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found Mar 13 10:35:54.438164 master-0 kubenswrapper[7271]: I0313 10:35:54.438033 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:35:54.438164 master-0 kubenswrapper[7271]: I0313 10:35:54.438139 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:35:54.438164 master-0 kubenswrapper[7271]: I0313 10:35:54.438171 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:35:54.438651 master-0 kubenswrapper[7271]: E0313 10:35:54.438381 7271 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 10:35:54.438651 master-0 kubenswrapper[7271]: I0313 10:35:54.438424 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:35:54.438651 master-0 kubenswrapper[7271]: I0313 10:35:54.438462 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:35:54.438651 master-0 kubenswrapper[7271]: E0313 10:35:54.438495 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.438463577 +0000 UTC m=+16.965286157 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : secret "metrics-daemon-secret" not found Mar 13 10:35:54.438651 master-0 kubenswrapper[7271]: E0313 10:35:54.438604 7271 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 10:35:54.438651 master-0 kubenswrapper[7271]: E0313 10:35:54.438649 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.438637051 +0000 UTC m=+16.965459641 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found Mar 13 10:35:54.438651 master-0 kubenswrapper[7271]: E0313 10:35:54.438655 7271 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 10:35:54.439022 master-0 kubenswrapper[7271]: E0313 10:35:54.438704 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 10:35:54.439022 master-0 kubenswrapper[7271]: E0313 10:35:54.438748 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.438722784 +0000 UTC m=+16.965545344 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found Mar 13 10:35:54.439022 master-0 kubenswrapper[7271]: E0313 10:35:54.438774 7271 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 10:35:54.439022 master-0 kubenswrapper[7271]: E0313 10:35:54.438774 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. 
No retries permitted until 2026-03-13 10:36:02.438762715 +0000 UTC m=+16.965585115 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found Mar 13 10:35:54.439022 master-0 kubenswrapper[7271]: E0313 10:35:54.438808 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.438796456 +0000 UTC m=+16.965619056 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found Mar 13 10:35:56.073666 master-0 kubenswrapper[7271]: I0313 10:35:56.073211 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw"] Mar 13 10:35:56.074475 master-0 kubenswrapper[7271]: E0313 10:35:56.073878 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d917075d-bc69-49b3-acab-c4d496dd04fc" containerName="prober" Mar 13 10:35:56.074475 master-0 kubenswrapper[7271]: I0313 10:35:56.073898 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="d917075d-bc69-49b3-acab-c4d496dd04fc" containerName="prober" Mar 13 10:35:56.074475 master-0 kubenswrapper[7271]: E0313 10:35:56.073911 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerName="assisted-installer-controller" Mar 13 10:35:56.074475 
master-0 kubenswrapper[7271]: I0313 10:35:56.073920 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerName="assisted-installer-controller" Mar 13 10:35:56.074475 master-0 kubenswrapper[7271]: I0313 10:35:56.074019 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="d917075d-bc69-49b3-acab-c4d496dd04fc" containerName="prober" Mar 13 10:35:56.074475 master-0 kubenswrapper[7271]: I0313 10:35:56.074031 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerName="assisted-installer-controller" Mar 13 10:35:56.074475 master-0 kubenswrapper[7271]: I0313 10:35:56.074449 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.076313 master-0 kubenswrapper[7271]: I0313 10:35:56.076268 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 10:35:56.076811 master-0 kubenswrapper[7271]: I0313 10:35:56.076781 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 10:35:56.077018 master-0 kubenswrapper[7271]: I0313 10:35:56.076988 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 10:35:56.077071 master-0 kubenswrapper[7271]: I0313 10:35:56.077049 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 10:35:56.077909 master-0 kubenswrapper[7271]: I0313 10:35:56.077817 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 10:35:56.160642 master-0 kubenswrapper[7271]: I0313 10:35:56.160453 7271 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.160642 master-0 kubenswrapper[7271]: I0313 10:35:56.160550 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-config\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.161213 master-0 kubenswrapper[7271]: I0313 10:35:56.160996 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.161213 master-0 kubenswrapper[7271]: I0313 10:35:56.161078 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7hrd\" (UniqueName: \"kubernetes.io/projected/8d60570a-069b-43fe-be3e-814955fec7ce-kube-api-access-d7hrd\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.166615 master-0 kubenswrapper[7271]: I0313 10:35:56.166533 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw"] Mar 13 10:35:56.167787 master-0 
kubenswrapper[7271]: I0313 10:35:56.167749 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67f68cdb6-lbnl6"] Mar 13 10:35:56.168499 master-0 kubenswrapper[7271]: I0313 10:35:56.168478 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.171144 master-0 kubenswrapper[7271]: I0313 10:35:56.171114 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:35:56.171495 master-0 kubenswrapper[7271]: I0313 10:35:56.171178 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:35:56.171632 master-0 kubenswrapper[7271]: I0313 10:35:56.171497 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:35:56.171692 master-0 kubenswrapper[7271]: I0313 10:35:56.171332 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:35:56.171731 master-0 kubenswrapper[7271]: I0313 10:35:56.171342 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:35:56.179130 master-0 kubenswrapper[7271]: I0313 10:35:56.179091 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:35:56.259869 master-0 kubenswrapper[7271]: I0313 10:35:56.259811 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:35:56.261987 master-0 kubenswrapper[7271]: I0313 10:35:56.261963 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-config\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.267987 master-0 kubenswrapper[7271]: I0313 10:35:56.267940 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-proxy-ca-bundles\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.268357 master-0 kubenswrapper[7271]: I0313 10:35:56.268314 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.268982 master-0 kubenswrapper[7271]: I0313 10:35:56.268968 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.269095 master-0 kubenswrapper[7271]: I0313 10:35:56.269082 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7hrd\" (UniqueName: \"kubernetes.io/projected/8d60570a-069b-43fe-be3e-814955fec7ce-kube-api-access-d7hrd\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " 
pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.269198 master-0 kubenswrapper[7271]: I0313 10:35:56.269186 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-config\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.269275 master-0 kubenswrapper[7271]: I0313 10:35:56.269260 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.269352 master-0 kubenswrapper[7271]: I0313 10:35:56.269340 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwh7h\" (UniqueName: \"kubernetes.io/projected/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-kube-api-access-dwh7h\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.269459 master-0 kubenswrapper[7271]: I0313 10:35:56.269419 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.269658 master-0 kubenswrapper[7271]: E0313 10:35:56.269641 7271 secret.go:189] Couldn't get secret 
openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:35:56.269798 master-0 kubenswrapper[7271]: E0313 10:35:56.269785 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:35:56.769767961 +0000 UTC m=+11.296590351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : secret "serving-cert" not found Mar 13 10:35:56.270051 master-0 kubenswrapper[7271]: I0313 10:35:56.268875 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-config\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.270123 master-0 kubenswrapper[7271]: E0313 10:35:56.269954 7271 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:35:56.271478 master-0 kubenswrapper[7271]: E0313 10:35:56.271463 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:35:56.771413955 +0000 UTC m=+11.298236345 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : configmap "client-ca" not found Mar 13 10:35:56.371045 master-0 kubenswrapper[7271]: I0313 10:35:56.370889 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-proxy-ca-bundles\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.371364 master-0 kubenswrapper[7271]: I0313 10:35:56.371344 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.371613 master-0 kubenswrapper[7271]: I0313 10:35:56.371572 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-config\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.371776 master-0 kubenswrapper[7271]: E0313 10:35:56.371579 7271 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:35:56.371918 master-0 kubenswrapper[7271]: I0313 10:35:56.371717 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca\") pod 
\"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.371988 master-0 kubenswrapper[7271]: E0313 10:35:56.371913 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert podName:d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:56.871871381 +0000 UTC m=+11.398693841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert") pod "controller-manager-67f68cdb6-lbnl6" (UID: "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b") : secret "serving-cert" not found Mar 13 10:35:56.371988 master-0 kubenswrapper[7271]: I0313 10:35:56.371966 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwh7h\" (UniqueName: \"kubernetes.io/projected/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-kube-api-access-dwh7h\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.372172 master-0 kubenswrapper[7271]: E0313 10:35:56.372153 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:35:56.372384 master-0 kubenswrapper[7271]: E0313 10:35:56.372368 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca podName:d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:56.872303283 +0000 UTC m=+11.399125863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca") pod "controller-manager-67f68cdb6-lbnl6" (UID: "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b") : configmap "client-ca" not found Mar 13 10:35:56.372800 master-0 kubenswrapper[7271]: I0313 10:35:56.372735 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-proxy-ca-bundles\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.372877 master-0 kubenswrapper[7271]: I0313 10:35:56.372820 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-config\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.402433 master-0 kubenswrapper[7271]: I0313 10:35:56.402361 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67f68cdb6-lbnl6"] Mar 13 10:35:56.687491 master-0 kubenswrapper[7271]: I0313 10:35:56.684497 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwh7h\" (UniqueName: \"kubernetes.io/projected/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-kube-api-access-dwh7h\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.687491 master-0 kubenswrapper[7271]: I0313 10:35:56.687425 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7hrd\" (UniqueName: 
\"kubernetes.io/projected/8d60570a-069b-43fe-be3e-814955fec7ce-kube-api-access-d7hrd\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.786255 master-0 kubenswrapper[7271]: I0313 10:35:56.786186 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.786549 master-0 kubenswrapper[7271]: E0313 10:35:56.786419 7271 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:35:56.786549 master-0 kubenswrapper[7271]: E0313 10:35:56.786511 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:35:57.786490067 +0000 UTC m=+12.313312457 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : secret "serving-cert" not found Mar 13 10:35:56.786686 master-0 kubenswrapper[7271]: I0313 10:35:56.786626 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:56.786867 master-0 kubenswrapper[7271]: E0313 10:35:56.786818 7271 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:35:56.786928 master-0 kubenswrapper[7271]: E0313 10:35:56.786909 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:35:57.786889257 +0000 UTC m=+12.313711647 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : configmap "client-ca" not found Mar 13 10:35:56.888116 master-0 kubenswrapper[7271]: I0313 10:35:56.887989 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.888116 master-0 kubenswrapper[7271]: E0313 10:35:56.888016 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:35:56.888579 master-0 kubenswrapper[7271]: E0313 10:35:56.888225 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca podName:d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:57.888202286 +0000 UTC m=+12.415024856 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca") pod "controller-manager-67f68cdb6-lbnl6" (UID: "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b") : configmap "client-ca" not found Mar 13 10:35:56.888579 master-0 kubenswrapper[7271]: I0313 10:35:56.888293 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:56.888579 master-0 kubenswrapper[7271]: E0313 10:35:56.888505 7271 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:35:56.888579 master-0 kubenswrapper[7271]: E0313 10:35:56.888598 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert podName:d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:57.888562326 +0000 UTC m=+12.415384716 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert") pod "controller-manager-67f68cdb6-lbnl6" (UID: "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b") : secret "serving-cert" not found Mar 13 10:35:56.960157 master-0 kubenswrapper[7271]: I0313 10:35:56.959975 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67f68cdb6-lbnl6"] Mar 13 10:35:56.960436 master-0 kubenswrapper[7271]: E0313 10:35:56.960310 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" podUID="d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b" Mar 13 10:35:57.498040 master-0 kubenswrapper[7271]: I0313 10:35:57.497684 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:57.498040 master-0 kubenswrapper[7271]: I0313 10:35:57.498004 7271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:35:57.517075 master-0 kubenswrapper[7271]: I0313 10:35:57.516998 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:35:57.783379 master-0 kubenswrapper[7271]: I0313 10:35:57.783244 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:57.791261 master-0 kubenswrapper[7271]: I0313 10:35:57.791230 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:57.800413 master-0 kubenswrapper[7271]: I0313 10:35:57.800341 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:57.800581 master-0 kubenswrapper[7271]: I0313 10:35:57.800500 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:57.800699 master-0 kubenswrapper[7271]: E0313 10:35:57.800670 7271 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:35:57.800781 master-0 kubenswrapper[7271]: E0313 10:35:57.800759 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:35:59.800735515 +0000 UTC m=+14.327557905 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : configmap "client-ca" not found Mar 13 10:35:57.801008 master-0 kubenswrapper[7271]: E0313 10:35:57.800979 7271 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:35:57.801174 master-0 kubenswrapper[7271]: E0313 10:35:57.801156 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:35:59.801128085 +0000 UTC m=+14.327950655 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : secret "serving-cert" not found Mar 13 10:35:57.901215 master-0 kubenswrapper[7271]: I0313 10:35:57.901140 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwh7h\" (UniqueName: \"kubernetes.io/projected/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-kube-api-access-dwh7h\") pod \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " Mar 13 10:35:57.901215 master-0 kubenswrapper[7271]: I0313 10:35:57.901211 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-config\") pod \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " Mar 13 10:35:57.901550 master-0 kubenswrapper[7271]: I0313 10:35:57.901433 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-proxy-ca-bundles\") pod \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " Mar 13 10:35:57.901765 master-0 kubenswrapper[7271]: I0313 10:35:57.901718 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-config" (OuterVolumeSpecName: "config") pod "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b" (UID: "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:35:57.901827 master-0 kubenswrapper[7271]: I0313 10:35:57.901798 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:57.902166 master-0 kubenswrapper[7271]: I0313 10:35:57.902129 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert\") pod \"controller-manager-67f68cdb6-lbnl6\" (UID: \"d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b\") " pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:57.902166 master-0 kubenswrapper[7271]: I0313 10:35:57.902115 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b" (UID: "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:35:57.902578 master-0 kubenswrapper[7271]: E0313 10:35:57.902278 7271 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:35:57.902719 master-0 kubenswrapper[7271]: E0313 10:35:57.902603 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert podName:d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:59.902560667 +0000 UTC m=+14.429383057 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert") pod "controller-manager-67f68cdb6-lbnl6" (UID: "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b") : secret "serving-cert" not found Mar 13 10:35:57.902845 master-0 kubenswrapper[7271]: E0313 10:35:57.902812 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:35:57.902960 master-0 kubenswrapper[7271]: I0313 10:35:57.902852 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:57.903035 master-0 kubenswrapper[7271]: E0313 10:35:57.903022 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca podName:d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b nodeName:}" failed. No retries permitted until 2026-03-13 10:35:59.902940847 +0000 UTC m=+14.429763237 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca") pod "controller-manager-67f68cdb6-lbnl6" (UID: "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b") : configmap "client-ca" not found Mar 13 10:35:57.903129 master-0 kubenswrapper[7271]: I0313 10:35:57.903116 7271 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:57.905089 master-0 kubenswrapper[7271]: I0313 10:35:57.905048 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-kube-api-access-dwh7h" (OuterVolumeSpecName: "kube-api-access-dwh7h") pod "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b" (UID: "d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b"). InnerVolumeSpecName "kube-api-access-dwh7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:35:58.004611 master-0 kubenswrapper[7271]: I0313 10:35:58.004537 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwh7h\" (UniqueName: \"kubernetes.io/projected/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-kube-api-access-dwh7h\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:58.787321 master-0 kubenswrapper[7271]: I0313 10:35:58.787172 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67f68cdb6-lbnl6" Mar 13 10:35:58.820428 master-0 kubenswrapper[7271]: I0313 10:35:58.820356 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k"] Mar 13 10:35:58.821216 master-0 kubenswrapper[7271]: I0313 10:35:58.821184 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67f68cdb6-lbnl6"] Mar 13 10:35:58.821383 master-0 kubenswrapper[7271]: I0313 10:35:58.821341 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:58.823662 master-0 kubenswrapper[7271]: I0313 10:35:58.823620 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:35:58.823790 master-0 kubenswrapper[7271]: I0313 10:35:58.823736 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:35:58.824527 master-0 kubenswrapper[7271]: I0313 10:35:58.824406 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:35:58.824790 master-0 kubenswrapper[7271]: I0313 10:35:58.824672 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:35:58.825238 master-0 kubenswrapper[7271]: I0313 10:35:58.825116 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:35:58.833429 master-0 kubenswrapper[7271]: I0313 10:35:58.833316 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:35:58.853840 master-0 kubenswrapper[7271]: I0313 10:35:58.853796 7271 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-controller-manager/controller-manager-67f68cdb6-lbnl6"] Mar 13 10:35:58.854467 master-0 kubenswrapper[7271]: I0313 10:35:58.854417 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k"] Mar 13 10:35:58.918335 master-0 kubenswrapper[7271]: I0313 10:35:58.918257 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-proxy-ca-bundles\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:58.918664 master-0 kubenswrapper[7271]: I0313 10:35:58.918376 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-config\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:58.918664 master-0 kubenswrapper[7271]: I0313 10:35:58.918401 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxzzm\" (UniqueName: \"kubernetes.io/projected/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-kube-api-access-bxzzm\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:58.918664 master-0 kubenswrapper[7271]: I0313 10:35:58.918420 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: 
\"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:58.918664 master-0 kubenswrapper[7271]: I0313 10:35:58.918435 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:58.918664 master-0 kubenswrapper[7271]: I0313 10:35:58.918493 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:58.918664 master-0 kubenswrapper[7271]: I0313 10:35:58.918504 7271 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:35:59.019631 master-0 kubenswrapper[7271]: I0313 10:35:59.019535 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-proxy-ca-bundles\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.020688 master-0 kubenswrapper[7271]: I0313 10:35:59.020166 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-config\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.020688 
master-0 kubenswrapper[7271]: I0313 10:35:59.020242 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxzzm\" (UniqueName: \"kubernetes.io/projected/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-kube-api-access-bxzzm\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.020688 master-0 kubenswrapper[7271]: I0313 10:35:59.020432 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.020688 master-0 kubenswrapper[7271]: I0313 10:35:59.020462 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.020688 master-0 kubenswrapper[7271]: E0313 10:35:59.020652 7271 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:35:59.020688 master-0 kubenswrapper[7271]: E0313 10:35:59.020696 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:59.520683303 +0000 UTC m=+14.047505693 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : secret "serving-cert" not found Mar 13 10:35:59.020688 master-0 kubenswrapper[7271]: E0313 10:35:59.020700 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:35:59.021222 master-0 kubenswrapper[7271]: E0313 10:35:59.020740 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:35:59.520727414 +0000 UTC m=+14.047549814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : configmap "client-ca" not found Mar 13 10:35:59.021222 master-0 kubenswrapper[7271]: I0313 10:35:59.020871 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-proxy-ca-bundles\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.021539 master-0 kubenswrapper[7271]: I0313 10:35:59.021497 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-config\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.050138 master-0 
kubenswrapper[7271]: I0313 10:35:59.050071 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxzzm\" (UniqueName: \"kubernetes.io/projected/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-kube-api-access-bxzzm\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.527815 master-0 kubenswrapper[7271]: I0313 10:35:59.527624 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.527815 master-0 kubenswrapper[7271]: E0313 10:35:59.527765 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:35:59.528142 master-0 kubenswrapper[7271]: I0313 10:35:59.527819 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:35:59.528142 master-0 kubenswrapper[7271]: E0313 10:35:59.527855 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:00.527836263 +0000 UTC m=+15.054658653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : configmap "client-ca" not found Mar 13 10:35:59.528142 master-0 kubenswrapper[7271]: E0313 10:35:59.527959 7271 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:35:59.528142 master-0 kubenswrapper[7271]: E0313 10:35:59.528024 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:00.528007797 +0000 UTC m=+15.054830187 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : secret "serving-cert" not found Mar 13 10:35:59.652445 master-0 kubenswrapper[7271]: I0313 10:35:59.652374 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b" path="/var/lib/kubelet/pods/d2e0bf4f-fa32-4b11-a135-ea8d7f806d0b/volumes" Mar 13 10:35:59.831761 master-0 kubenswrapper[7271]: I0313 10:35:59.831664 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:59.831761 master-0 kubenswrapper[7271]: I0313 10:35:59.831777 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:35:59.832721 master-0 kubenswrapper[7271]: E0313 10:35:59.831944 7271 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:35:59.832721 master-0 kubenswrapper[7271]: E0313 10:35:59.832065 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:36:03.83203414 +0000 UTC m=+18.358856760 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : configmap "client-ca" not found Mar 13 10:35:59.832721 master-0 kubenswrapper[7271]: E0313 10:35:59.832322 7271 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:35:59.832721 master-0 kubenswrapper[7271]: E0313 10:35:59.832430 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:36:03.832382989 +0000 UTC m=+18.359205549 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : secret "serving-cert" not found Mar 13 10:36:00.541814 master-0 kubenswrapper[7271]: I0313 10:36:00.541726 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:00.541814 master-0 kubenswrapper[7271]: I0313 10:36:00.541793 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:00.542244 master-0 kubenswrapper[7271]: E0313 10:36:00.541993 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:36:00.542244 master-0 kubenswrapper[7271]: E0313 10:36:00.542130 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.542104605 +0000 UTC m=+17.068927165 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : configmap "client-ca" not found Mar 13 10:36:00.542363 master-0 kubenswrapper[7271]: E0313 10:36:00.542309 7271 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:36:00.542464 master-0 kubenswrapper[7271]: E0313 10:36:00.542441 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:02.542412524 +0000 UTC m=+17.069235084 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : secret "serving-cert" not found Mar 13 10:36:01.799849 master-0 kubenswrapper[7271]: I0313 10:36:01.799770 7271 generic.go:334] "Generic (PLEG): container finished" podID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerID="5948c776742a66ca9c8dc4ab4653ab39ea0f5fc6e05a6a107b0cddf0d69c875e" exitCode=0 Mar 13 10:36:01.799849 master-0 kubenswrapper[7271]: I0313 10:36:01.799841 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" event={"ID":"6ed47c57-533f-43e4-88eb-07da29b4878f","Type":"ContainerDied","Data":"5948c776742a66ca9c8dc4ab4653ab39ea0f5fc6e05a6a107b0cddf0d69c875e"} Mar 13 10:36:02.372955 master-0 kubenswrapper[7271]: I0313 10:36:02.372705 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod 
\"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:36:02.372955 master-0 kubenswrapper[7271]: I0313 10:36:02.372791 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:36:02.372955 master-0 kubenswrapper[7271]: I0313 10:36:02.372824 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:36:02.372955 master-0 kubenswrapper[7271]: I0313 10:36:02.372857 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:36:02.373319 master-0 kubenswrapper[7271]: E0313 10:36:02.373017 7271 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:36:02.373319 master-0 kubenswrapper[7271]: E0313 10:36:02.373126 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls 
podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.373104319 +0000 UTC m=+32.899926709 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found Mar 13 10:36:02.373765 master-0 kubenswrapper[7271]: E0313 10:36:02.373458 7271 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 10:36:02.373861 master-0 kubenswrapper[7271]: E0313 10:36:02.373775 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 10:36:02.373907 master-0 kubenswrapper[7271]: E0313 10:36:02.373522 7271 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 10:36:02.373942 master-0 kubenswrapper[7271]: I0313 10:36:02.373863 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:36:02.374066 master-0 kubenswrapper[7271]: E0313 10:36:02.373869 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.373847879 +0000 UTC m=+32.900670469 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "node-tuning-operator-tls" not found Mar 13 10:36:02.374114 master-0 kubenswrapper[7271]: E0313 10:36:02.374071 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls podName:8cf9326b-bc23-45c2-82c4-9c08c739ac5a nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.374057695 +0000 UTC m=+32.900880235 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-492v4" (UID: "8cf9326b-bc23-45c2-82c4-9c08c739ac5a") : secret "image-registry-operator-tls" not found Mar 13 10:36:02.374114 master-0 kubenswrapper[7271]: E0313 10:36:02.374093 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.374084986 +0000 UTC m=+32.900907376 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found Mar 13 10:36:02.374114 master-0 kubenswrapper[7271]: I0313 10:36:02.374110 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:36:02.374221 master-0 kubenswrapper[7271]: E0313 10:36:02.374024 7271 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 10:36:02.374221 master-0 kubenswrapper[7271]: I0313 10:36:02.374175 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:36:02.374221 master-0 kubenswrapper[7271]: I0313 10:36:02.374211 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:36:02.374325 master-0 kubenswrapper[7271]: E0313 10:36:02.374287 7271 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls podName:3ff2ab1c-7057-4e18-8e32-68807f86532a nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.37423262 +0000 UTC m=+32.901055210 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls") pod "dns-operator-589895fbb7-wjrpm" (UID: "3ff2ab1c-7057-4e18-8e32-68807f86532a") : secret "metrics-tls" not found Mar 13 10:36:02.374374 master-0 kubenswrapper[7271]: E0313 10:36:02.374341 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 10:36:02.374374 master-0 kubenswrapper[7271]: E0313 10:36:02.374364 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.374357723 +0000 UTC m=+32.901180113 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found Mar 13 10:36:02.374481 master-0 kubenswrapper[7271]: E0313 10:36:02.374436 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 10:36:02.374538 master-0 kubenswrapper[7271]: E0313 10:36:02.374503 7271 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 10:36:02.374538 master-0 kubenswrapper[7271]: E0313 10:36:02.374531 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert podName:42b4d53c-af72-44c8-9605-271445f95f87 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.374524267 +0000 UTC m=+32.901346867 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-9fptc" (UID: "42b4d53c-af72-44c8-9605-271445f95f87") : secret "performance-addon-operator-webhook-cert" not found Mar 13 10:36:02.374692 master-0 kubenswrapper[7271]: E0313 10:36:02.374545 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.374538328 +0000 UTC m=+32.901360718 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found Mar 13 10:36:02.475174 master-0 kubenswrapper[7271]: I0313 10:36:02.475103 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:36:02.475174 master-0 kubenswrapper[7271]: I0313 10:36:02.475167 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:36:02.475429 master-0 kubenswrapper[7271]: I0313 10:36:02.475193 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:36:02.475429 master-0 kubenswrapper[7271]: E0313 10:36:02.475382 7271 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 10:36:02.475511 master-0 kubenswrapper[7271]: E0313 10:36:02.475487 7271 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.475463322 +0000 UTC m=+33.002285912 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found Mar 13 10:36:02.475568 master-0 kubenswrapper[7271]: I0313 10:36:02.475538 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:36:02.475643 master-0 kubenswrapper[7271]: E0313 10:36:02.475609 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 10:36:02.475724 master-0 kubenswrapper[7271]: I0313 10:36:02.475629 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:36:02.475724 master-0 kubenswrapper[7271]: E0313 10:36:02.475707 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" 
failed. No retries permitted until 2026-03-13 10:36:18.475683178 +0000 UTC m=+33.002505568 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found Mar 13 10:36:02.475818 master-0 kubenswrapper[7271]: E0313 10:36:02.475774 7271 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 10:36:02.475818 master-0 kubenswrapper[7271]: E0313 10:36:02.475776 7271 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 10:36:02.475902 master-0 kubenswrapper[7271]: E0313 10:36:02.475833 7271 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 10:36:02.475902 master-0 kubenswrapper[7271]: E0313 10:36:02.475841 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert podName:4aaf36b4-e556-4723-a624-aa2edc69c301 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.475818102 +0000 UTC m=+33.002640492 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert") pod "cluster-version-operator-745944c6b7-s6k7z" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301") : secret "cluster-version-operator-serving-cert" not found Mar 13 10:36:02.476072 master-0 kubenswrapper[7271]: E0313 10:36:02.476042 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.476018687 +0000 UTC m=+33.002841247 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : secret "metrics-daemon-secret" not found Mar 13 10:36:02.476133 master-0 kubenswrapper[7271]: E0313 10:36:02.476074 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:18.476062898 +0000 UTC m=+33.002885528 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found Mar 13 10:36:02.577674 master-0 kubenswrapper[7271]: I0313 10:36:02.577280 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:02.577674 master-0 kubenswrapper[7271]: I0313 10:36:02.577354 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:02.578067 master-0 kubenswrapper[7271]: E0313 10:36:02.577709 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:36:02.578067 master-0 kubenswrapper[7271]: E0313 10:36:02.577848 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:06.577798094 +0000 UTC m=+21.104620484 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : configmap "client-ca" not found Mar 13 10:36:02.578648 master-0 kubenswrapper[7271]: E0313 10:36:02.578418 7271 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:36:02.578719 master-0 kubenswrapper[7271]: E0313 10:36:02.578696 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:06.578674708 +0000 UTC m=+21.105497318 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : secret "serving-cert" not found Mar 13 10:36:02.805232 master-0 kubenswrapper[7271]: I0313 10:36:02.805155 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" event={"ID":"a1a998af-4fc0-4078-a6a0-93dde6c00508","Type":"ContainerStarted","Data":"dbff0a4ca77dfd3c5dce218a106dba837080cd80ee7f274b5ebceb8f682ccabd"} Mar 13 10:36:03.811431 master-0 kubenswrapper[7271]: I0313 10:36:03.811090 7271 generic.go:334] "Generic (PLEG): container finished" podID="b8d40b37-0f3d-4531-9fa8-eda965d2337d" containerID="11c77f1b96585ddf0a5deeffc87c0df0c85a30ab4a6f38b300cbba0aba3b3555" exitCode=0 Mar 13 10:36:03.812455 master-0 kubenswrapper[7271]: I0313 10:36:03.811169 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" 
event={"ID":"b8d40b37-0f3d-4531-9fa8-eda965d2337d","Type":"ContainerDied","Data":"11c77f1b96585ddf0a5deeffc87c0df0c85a30ab4a6f38b300cbba0aba3b3555"} Mar 13 10:36:03.816345 master-0 kubenswrapper[7271]: I0313 10:36:03.816221 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" event={"ID":"ec3168fc-6c8f-4603-94e0-17b1ae22a802","Type":"ContainerStarted","Data":"1920e0c05ffebe7a0fab80b000aebd0c99a9626ca78c9c2b099c218c0c998378"} Mar 13 10:36:03.898565 master-0 kubenswrapper[7271]: I0313 10:36:03.898174 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:36:03.898565 master-0 kubenswrapper[7271]: E0313 10:36:03.898310 7271 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:36:03.898565 master-0 kubenswrapper[7271]: E0313 10:36:03.898406 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:36:11.898386199 +0000 UTC m=+26.425208589 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : configmap "client-ca" not found Mar 13 10:36:03.898565 master-0 kubenswrapper[7271]: I0313 10:36:03.898532 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:36:03.898971 master-0 kubenswrapper[7271]: E0313 10:36:03.898713 7271 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:36:03.898971 master-0 kubenswrapper[7271]: E0313 10:36:03.898774 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:36:11.89876131 +0000 UTC m=+26.425583920 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : secret "serving-cert" not found Mar 13 10:36:04.531830 master-0 kubenswrapper[7271]: I0313 10:36:04.530969 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv"] Mar 13 10:36:04.531830 master-0 kubenswrapper[7271]: I0313 10:36:04.531676 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv"
Mar 13 10:36:04.532415 master-0 kubenswrapper[7271]: I0313 10:36:04.532292 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv"]
Mar 13 10:36:04.534702 master-0 kubenswrapper[7271]: I0313 10:36:04.534202 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 13 10:36:04.534702 master-0 kubenswrapper[7271]: I0313 10:36:04.534411 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 13 10:36:04.606880 master-0 kubenswrapper[7271]: I0313 10:36:04.606763 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8rp\" (UniqueName: \"kubernetes.io/projected/d288e5d0-0976-477f-be14-b3d5828e0482-kube-api-access-5k8rp\") pod \"migrator-57ccdf9b5-fgvbv\" (UID: \"d288e5d0-0976-477f-be14-b3d5828e0482\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv"
Mar 13 10:36:04.708855 master-0 kubenswrapper[7271]: I0313 10:36:04.708704 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k8rp\" (UniqueName: \"kubernetes.io/projected/d288e5d0-0976-477f-be14-b3d5828e0482-kube-api-access-5k8rp\") pod \"migrator-57ccdf9b5-fgvbv\" (UID: \"d288e5d0-0976-477f-be14-b3d5828e0482\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv"
Mar 13 10:36:04.729842 master-0 kubenswrapper[7271]: I0313 10:36:04.728788 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k8rp\" (UniqueName: \"kubernetes.io/projected/d288e5d0-0976-477f-be14-b3d5828e0482-kube-api-access-5k8rp\") pod \"migrator-57ccdf9b5-fgvbv\" (UID: \"d288e5d0-0976-477f-be14-b3d5828e0482\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv"
Mar 13 10:36:04.826372 master-0 kubenswrapper[7271]: I0313 10:36:04.826300 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" event={"ID":"5ed5e77b-948b-4d94-ac9f-440ee3c07e18","Type":"ContainerStarted","Data":"dacb5471d19718622299f0fa6f9e909a820c9329353d0e6ad130c4eb61cefa28"}
Mar 13 10:36:04.862275 master-0 kubenswrapper[7271]: I0313 10:36:04.862189 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv"
Mar 13 10:36:05.870803 master-0 kubenswrapper[7271]: I0313 10:36:05.869818 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-gdjjd" event={"ID":"b12e76f4-b960-4534-90e6-a2cdbecd1728","Type":"ContainerStarted","Data":"7a084911f51ef0b6e0fe289667eda8e019242097416acb49dfe31435aba976e2"}
Mar 13 10:36:06.010287 master-0 kubenswrapper[7271]: I0313 10:36:06.007113 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv"]
Mar 13 10:36:06.023377 master-0 kubenswrapper[7271]: W0313 10:36:06.022886 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd288e5d0_0976_477f_be14_b3d5828e0482.slice/crio-0703b273c4e03bb7b56beaec08bcd6e173eecb63d506d0b227eed01f4963105d WatchSource:0}: Error finding container 0703b273c4e03bb7b56beaec08bcd6e173eecb63d506d0b227eed01f4963105d: Status 404 returned error can't find the container with id 0703b273c4e03bb7b56beaec08bcd6e173eecb63d506d0b227eed01f4963105d
Mar 13 10:36:06.631814 master-0 kubenswrapper[7271]: I0313 10:36:06.631435 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k"
Mar 13 10:36:06.631814 master-0 kubenswrapper[7271]: I0313 10:36:06.631799 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k"
Mar 13 10:36:06.631814 master-0 kubenswrapper[7271]: E0313 10:36:06.631741 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 10:36:06.632181 master-0 kubenswrapper[7271]: E0313 10:36:06.631871 7271 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 10:36:06.632181 master-0 kubenswrapper[7271]: E0313 10:36:06.631877 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:14.631858234 +0000 UTC m=+29.158680624 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : configmap "client-ca" not found
Mar 13 10:36:06.632371 master-0 kubenswrapper[7271]: E0313 10:36:06.632321 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:14.632289595 +0000 UTC m=+29.159111985 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : secret "serving-cert" not found
Mar 13 10:36:06.873858 master-0 kubenswrapper[7271]: I0313 10:36:06.873798 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv" event={"ID":"d288e5d0-0976-477f-be14-b3d5828e0482","Type":"ContainerStarted","Data":"0703b273c4e03bb7b56beaec08bcd6e173eecb63d506d0b227eed01f4963105d"}
Mar 13 10:36:08.887629 master-0 kubenswrapper[7271]: I0313 10:36:08.887516 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" event={"ID":"6ed47c57-533f-43e4-88eb-07da29b4878f","Type":"ContainerStarted","Data":"f5cc508c8bba11aea5ee45f0185ba6b283bf13e245305fcd3727611ac4aa5998"}
Mar 13 10:36:08.888615 master-0 kubenswrapper[7271]: I0313 10:36:08.887844 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:36:08.895656 master-0 kubenswrapper[7271]: I0313 10:36:08.895134 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" event={"ID":"37b2e803-302b-4650-b18f-d3d2dd703bd5","Type":"ContainerStarted","Data":"0726d914d99337ac6ae1fc3306b6380d27700c4e1ef052dd78af4add66671237"}
Mar 13 10:36:08.897281 master-0 kubenswrapper[7271]: I0313 10:36:08.897232 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" event={"ID":"866cf034-8fd8-4f16-8e9b-68627228aa8d","Type":"ContainerStarted","Data":"838b4cfccf523638ccd0bf31bf9b16492b12c33b0f070423ea23f66b9d72c78e"}
Mar 13 10:36:08.901040 master-0 kubenswrapper[7271]: I0313 10:36:08.900988 7271 generic.go:334] "Generic (PLEG): container finished" podID="b8d40b37-0f3d-4531-9fa8-eda965d2337d" containerID="220a150d44b2158d9daff116df4a5c802964a9b842e1b8dda3de819c2cb69708" exitCode=0
Mar 13 10:36:08.901178 master-0 kubenswrapper[7271]: I0313 10:36:08.901047 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" event={"ID":"b8d40b37-0f3d-4531-9fa8-eda965d2337d","Type":"ContainerDied","Data":"220a150d44b2158d9daff116df4a5c802964a9b842e1b8dda3de819c2cb69708"}
Mar 13 10:36:09.150883 master-0 kubenswrapper[7271]: I0313 10:36:09.148832 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"]
Mar 13 10:36:09.150883 master-0 kubenswrapper[7271]: I0313 10:36:09.149748 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"
Mar 13 10:36:09.161055 master-0 kubenswrapper[7271]: I0313 10:36:09.160947 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"]
Mar 13 10:36:09.281846 master-0 kubenswrapper[7271]: I0313 10:36:09.281787 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqg6g\" (UniqueName: \"kubernetes.io/projected/6622be09-206e-4d02-90ca-6d9f2fc852aa-kube-api-access-lqg6g\") pod \"csi-snapshot-controller-7577d6f48-cbhxt\" (UID: \"6622be09-206e-4d02-90ca-6d9f2fc852aa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"
Mar 13 10:36:09.383110 master-0 kubenswrapper[7271]: I0313 10:36:09.383046 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqg6g\" (UniqueName: \"kubernetes.io/projected/6622be09-206e-4d02-90ca-6d9f2fc852aa-kube-api-access-lqg6g\") pod \"csi-snapshot-controller-7577d6f48-cbhxt\" (UID: \"6622be09-206e-4d02-90ca-6d9f2fc852aa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"
Mar 13 10:36:09.407631 master-0 kubenswrapper[7271]: I0313 10:36:09.407245 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqg6g\" (UniqueName: \"kubernetes.io/projected/6622be09-206e-4d02-90ca-6d9f2fc852aa-kube-api-access-lqg6g\") pod \"csi-snapshot-controller-7577d6f48-cbhxt\" (UID: \"6622be09-206e-4d02-90ca-6d9f2fc852aa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"
Mar 13 10:36:09.496228 master-0 kubenswrapper[7271]: I0313 10:36:09.496169 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"
Mar 13 10:36:09.724394 master-0 kubenswrapper[7271]: I0313 10:36:09.722708 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"]
Mar 13 10:36:09.908122 master-0 kubenswrapper[7271]: I0313 10:36:09.908033 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv" event={"ID":"d288e5d0-0976-477f-be14-b3d5828e0482","Type":"ContainerStarted","Data":"5a9c616bfe0a062e544c9f1d9db25f3545639185ec68c6ca8af3e999e37c63b7"}
Mar 13 10:36:09.908122 master-0 kubenswrapper[7271]: I0313 10:36:09.908093 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv" event={"ID":"d288e5d0-0976-477f-be14-b3d5828e0482","Type":"ContainerStarted","Data":"02d9c9a8e3fe398796e9442be5d23ce35f4b717b83192e2fd31d56ea7dbd2404"}
Mar 13 10:36:09.910398 master-0 kubenswrapper[7271]: I0313 10:36:09.910363 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerStarted","Data":"40709622bc83dd44130ec2874b3fecd53ec9c74c9ec5ea39d2f7a0dcddaf6a5c"}
Mar 13 10:36:09.956939 master-0 kubenswrapper[7271]: I0313 10:36:09.954741 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv" podStartSLOduration=2.491789543 podStartE2EDuration="5.954709986s" podCreationTimestamp="2026-03-13 10:36:04 +0000 UTC" firstStartedPulling="2026-03-13 10:36:06.026225726 +0000 UTC m=+20.553048116" lastFinishedPulling="2026-03-13 10:36:09.489146169 +0000 UTC m=+24.015968559" observedRunningTime="2026-03-13 10:36:09.952102515 +0000 UTC m=+24.478924925" watchObservedRunningTime="2026-03-13 10:36:09.954709986 +0000 UTC m=+24.481532376"
Mar 13 10:36:11.325427 master-0 kubenswrapper[7271]: I0313 10:36:11.325352 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 13 10:36:11.326256 master-0 kubenswrapper[7271]: I0313 10:36:11.326091 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.327765 master-0 kubenswrapper[7271]: I0313 10:36:11.327709 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 13 10:36:11.332006 master-0 kubenswrapper[7271]: I0313 10:36:11.331911 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 13 10:36:11.409503 master-0 kubenswrapper[7271]: I0313 10:36:11.409423 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-var-lock\") pod \"installer-1-master-0\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.409503 master-0 kubenswrapper[7271]: I0313 10:36:11.409503 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.409884 master-0 kubenswrapper[7271]: I0313 10:36:11.409806 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.511521 master-0 kubenswrapper[7271]: I0313 10:36:11.511456 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.511521 master-0 kubenswrapper[7271]: I0313 10:36:11.511537 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-var-lock\") pod \"installer-1-master-0\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.511849 master-0 kubenswrapper[7271]: I0313 10:36:11.511707 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-var-lock\") pod \"installer-1-master-0\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.511849 master-0 kubenswrapper[7271]: I0313 10:36:11.511799 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.511914 master-0 kubenswrapper[7271]: I0313 10:36:11.511884 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.535783 master-0 kubenswrapper[7271]: I0313 10:36:11.535711 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.653886 master-0 kubenswrapper[7271]: I0313 10:36:11.653707 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:11.924453 master-0 kubenswrapper[7271]: I0313 10:36:11.918327 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw"
Mar 13 10:36:11.924453 master-0 kubenswrapper[7271]: I0313 10:36:11.918454 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw"
Mar 13 10:36:11.924453 master-0 kubenswrapper[7271]: E0313 10:36:11.918662 7271 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 10:36:11.924453 master-0 kubenswrapper[7271]: E0313 10:36:11.918986 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:36:27.918962615 +0000 UTC m=+42.445785005 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : secret "serving-cert" not found
Mar 13 10:36:11.924453 master-0 kubenswrapper[7271]: E0313 10:36:11.921011 7271 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 10:36:11.924453 master-0 kubenswrapper[7271]: E0313 10:36:11.921074 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:36:27.921057332 +0000 UTC m=+42.447879722 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : configmap "client-ca" not found
Mar 13 10:36:12.252094 master-0 kubenswrapper[7271]: I0313 10:36:12.251725 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"]
Mar 13 10:36:12.252707 master-0 kubenswrapper[7271]: I0313 10:36:12.252681 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.257959 master-0 kubenswrapper[7271]: I0313 10:36:12.257551 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 13 10:36:12.258715 master-0 kubenswrapper[7271]: I0313 10:36:12.258350 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 13 10:36:12.259798 master-0 kubenswrapper[7271]: I0313 10:36:12.259759 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 13 10:36:12.259988 master-0 kubenswrapper[7271]: I0313 10:36:12.258009 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 13 10:36:12.261823 master-0 kubenswrapper[7271]: I0313 10:36:12.261795 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"]
Mar 13 10:36:12.327463 master-0 kubenswrapper[7271]: I0313 10:36:12.326327 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 13 10:36:12.347850 master-0 kubenswrapper[7271]: W0313 10:36:12.347807 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podee9ccb5b_e38c_45dd_a762_3ece1ffa80bf.slice/crio-6846571eb7604f88adba4d52809ed29920ec8a6a32a2b601655a0f4b9e49c442 WatchSource:0}: Error finding container 6846571eb7604f88adba4d52809ed29920ec8a6a32a2b601655a0f4b9e49c442: Status 404 returned error can't find the container with id 6846571eb7604f88adba4d52809ed29920ec8a6a32a2b601655a0f4b9e49c442
Mar 13 10:36:12.353106 master-0 kubenswrapper[7271]: I0313 10:36:12.353034 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/549bd192-0235-4994-b485-f1b10d16f6b5-signing-key\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.353186 master-0 kubenswrapper[7271]: I0313 10:36:12.353105 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/549bd192-0235-4994-b485-f1b10d16f6b5-signing-cabundle\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.353260 master-0 kubenswrapper[7271]: I0313 10:36:12.353237 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwqp6\" (UniqueName: \"kubernetes.io/projected/549bd192-0235-4994-b485-f1b10d16f6b5-kube-api-access-pwqp6\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.456706 master-0 kubenswrapper[7271]: I0313 10:36:12.456341 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/549bd192-0235-4994-b485-f1b10d16f6b5-signing-key\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.456706 master-0 kubenswrapper[7271]: I0313 10:36:12.456414 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/549bd192-0235-4994-b485-f1b10d16f6b5-signing-cabundle\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.456706 master-0 kubenswrapper[7271]: I0313 10:36:12.456463 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwqp6\" (UniqueName: \"kubernetes.io/projected/549bd192-0235-4994-b485-f1b10d16f6b5-kube-api-access-pwqp6\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.458791 master-0 kubenswrapper[7271]: I0313 10:36:12.458730 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/549bd192-0235-4994-b485-f1b10d16f6b5-signing-cabundle\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.464026 master-0 kubenswrapper[7271]: I0313 10:36:12.463963 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/549bd192-0235-4994-b485-f1b10d16f6b5-signing-key\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.481713 master-0 kubenswrapper[7271]: I0313 10:36:12.481665 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwqp6\" (UniqueName: \"kubernetes.io/projected/549bd192-0235-4994-b485-f1b10d16f6b5-kube-api-access-pwqp6\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.613268 master-0 kubenswrapper[7271]: I0313 10:36:12.613191 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:36:12.801049 master-0 kubenswrapper[7271]: I0313 10:36:12.800425 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"]
Mar 13 10:36:12.806642 master-0 kubenswrapper[7271]: W0313 10:36:12.806570 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod549bd192_0235_4994_b485_f1b10d16f6b5.slice/crio-d4855d0948cb05692ea183985279dcd65ea773074cc3b3ff4694481fd014efe8 WatchSource:0}: Error finding container d4855d0948cb05692ea183985279dcd65ea773074cc3b3ff4694481fd014efe8: Status 404 returned error can't find the container with id d4855d0948cb05692ea183985279dcd65ea773074cc3b3ff4694481fd014efe8
Mar 13 10:36:12.933057 master-0 kubenswrapper[7271]: I0313 10:36:12.932966 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" event={"ID":"549bd192-0235-4994-b485-f1b10d16f6b5","Type":"ContainerStarted","Data":"d4855d0948cb05692ea183985279dcd65ea773074cc3b3ff4694481fd014efe8"}
Mar 13 10:36:12.934922 master-0 kubenswrapper[7271]: I0313 10:36:12.934857 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerStarted","Data":"426e576deb6604dde643ee98f5460b9f1475fda12e39205758c5b7f3ec56452f"}
Mar 13 10:36:12.938066 master-0 kubenswrapper[7271]: I0313 10:36:12.937451 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" event={"ID":"b8d40b37-0f3d-4531-9fa8-eda965d2337d","Type":"ContainerStarted","Data":"a242486632cda89db044ed9feff7bb156e404c15924daa0514297e6cfa246629"}
Mar 13 10:36:12.939081 master-0 kubenswrapper[7271]: I0313 10:36:12.938739 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf","Type":"ContainerStarted","Data":"d84eecae63542ba948d53d567f42cad7dd26e9b2bfc0e6b741cc53afc3e9e71f"}
Mar 13 10:36:12.939081 master-0 kubenswrapper[7271]: I0313 10:36:12.938765 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf","Type":"ContainerStarted","Data":"6846571eb7604f88adba4d52809ed29920ec8a6a32a2b601655a0f4b9e49c442"}
Mar 13 10:36:13.052665 master-0 kubenswrapper[7271]: I0313 10:36:13.050753 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" podStartSLOduration=1.625010939 podStartE2EDuration="4.050680414s" podCreationTimestamp="2026-03-13 10:36:09 +0000 UTC" firstStartedPulling="2026-03-13 10:36:09.745219631 +0000 UTC m=+24.272042021" lastFinishedPulling="2026-03-13 10:36:12.170889106 +0000 UTC m=+26.697711496" observedRunningTime="2026-03-13 10:36:13.046007518 +0000 UTC m=+27.572829958" watchObservedRunningTime="2026-03-13 10:36:13.050680414 +0000 UTC m=+27.577502844"
Mar 13 10:36:13.147885 master-0 kubenswrapper[7271]: I0313 10:36:13.147787 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.147762004 podStartE2EDuration="2.147762004s" podCreationTimestamp="2026-03-13 10:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:13.146047098 +0000 UTC m=+27.672869488" watchObservedRunningTime="2026-03-13 10:36:13.147762004 +0000 UTC m=+27.674584404"
Mar 13 10:36:13.200422 master-0 kubenswrapper[7271]: I0313 10:36:13.200363 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 13 10:36:13.201032 master-0 kubenswrapper[7271]: I0313 10:36:13.201003 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.204433 master-0 kubenswrapper[7271]: I0313 10:36:13.204377 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Mar 13 10:36:13.265983 master-0 kubenswrapper[7271]: I0313 10:36:13.265533 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 13 10:36:13.369182 master-0 kubenswrapper[7271]: I0313 10:36:13.369093 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-var-lock\") pod \"installer-1-master-0\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.370830 master-0 kubenswrapper[7271]: I0313 10:36:13.369319 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00e8e251-40d9-458a-92a7-9b2e91dc7359-kube-api-access\") pod \"installer-1-master-0\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.370830 master-0 kubenswrapper[7271]: I0313 10:36:13.369407 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.445759 master-0 kubenswrapper[7271]: I0313 10:36:13.445703 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:36:13.470402 master-0 kubenswrapper[7271]: I0313 10:36:13.470340 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.470696 master-0 kubenswrapper[7271]: I0313 10:36:13.470523 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.470975 master-0 kubenswrapper[7271]: I0313 10:36:13.470951 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-var-lock\") pod \"installer-1-master-0\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.471092 master-0 kubenswrapper[7271]: I0313 10:36:13.471047 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-var-lock\") pod \"installer-1-master-0\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.471148 master-0 kubenswrapper[7271]: I0313 10:36:13.471072 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00e8e251-40d9-458a-92a7-9b2e91dc7359-kube-api-access\") pod \"installer-1-master-0\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.501624 master-0 kubenswrapper[7271]: I0313 10:36:13.500360 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00e8e251-40d9-458a-92a7-9b2e91dc7359-kube-api-access\") pod \"installer-1-master-0\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.518680 master-0 kubenswrapper[7271]: I0313 10:36:13.517886 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 13 10:36:13.761497 master-0 kubenswrapper[7271]: I0313 10:36:13.761384 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 13 10:36:13.773214 master-0 kubenswrapper[7271]: W0313 10:36:13.773144 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod00e8e251_40d9_458a_92a7_9b2e91dc7359.slice/crio-b45c64d6449de0fbb67e8c6c87b585367854c2872ab4281e8171784f28b9d333 WatchSource:0}: Error finding container b45c64d6449de0fbb67e8c6c87b585367854c2872ab4281e8171784f28b9d333: Status 404 returned error can't find the container with id b45c64d6449de0fbb67e8c6c87b585367854c2872ab4281e8171784f28b9d333
Mar 13 10:36:13.951642 master-0 kubenswrapper[7271]: I0313 10:36:13.948604 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"00e8e251-40d9-458a-92a7-9b2e91dc7359","Type":"ContainerStarted","Data":"b45c64d6449de0fbb67e8c6c87b585367854c2872ab4281e8171784f28b9d333"}
Mar 13 10:36:13.960618 master-0 kubenswrapper[7271]: I0313 10:36:13.958808 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" event={"ID":"549bd192-0235-4994-b485-f1b10d16f6b5","Type":"ContainerStarted","Data":"271da4cc5b20956051ed1d7f97405260dffc34901d137d8e75b3c407349229eb"}
Mar 13 10:36:14.077413 master-0 kubenswrapper[7271]: I0313 10:36:14.077303 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" podStartSLOduration=2.077277644 podStartE2EDuration="2.077277644s" podCreationTimestamp="2026-03-13 10:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:14.076562235 +0000 UTC m=+28.603384635" watchObservedRunningTime="2026-03-13 10:36:14.077277644 +0000 UTC m=+28.604100044"
Mar 13 10:36:14.690020 master-0 kubenswrapper[7271]: I0313 10:36:14.689659 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k"
Mar 13 10:36:14.690020 master-0 kubenswrapper[7271]: I0313 10:36:14.690008 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k"
Mar 13 10:36:14.690020 master-0 kubenswrapper[7271]: E0313 10:36:14.689833 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 10:36:14.691046 master-0 kubenswrapper[7271]: E0313 10:36:14.690112 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:30.690089275 +0000 UTC m=+45.216911665 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : configmap "client-ca" not found
Mar 13 10:36:14.691046 master-0 kubenswrapper[7271]: E0313 10:36:14.690164 7271 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 10:36:14.691046 master-0 kubenswrapper[7271]: E0313 10:36:14.690230 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:30.690214258 +0000 UTC m=+45.217036648 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : secret "serving-cert" not found
Mar 13 10:36:14.966024 master-0 kubenswrapper[7271]: I0313 10:36:14.965848 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"00e8e251-40d9-458a-92a7-9b2e91dc7359","Type":"ContainerStarted","Data":"ff391d9c59813842d72b9912aea0684a5fa08ec853cdfa9eb1e377087c9747df"}
Mar 13 10:36:17.644673 master-0 kubenswrapper[7271]: I0313 10:36:17.644202 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=4.644183033 podStartE2EDuration="4.644183033s" podCreationTimestamp="2026-03-13 10:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:17.639575269 +0000 UTC m=+32.166397669" watchObservedRunningTime="2026-03-13 10:36:17.644183033 +0000 UTC m=+32.171005423"
Mar 13 10:36:18.443848 master-0 kubenswrapper[7271]: I0313 10:36:18.443749 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:36:18.443848 master-0 kubenswrapper[7271]: I0313 10:36:18.443839 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:36:18.444230 master-0 kubenswrapper[7271]: E0313 10:36:18.444022 7271 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 10:36:18.444230 master-0 kubenswrapper[7271]: E0313 10:36:18.444144 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls podName:7667717b-fb74-456b-8615-16475cb69e98 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:50.444122955 +0000 UTC m=+64.970945345 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls") pod "ingress-operator-677db989d6-tzd9b" (UID: "7667717b-fb74-456b-8615-16475cb69e98") : secret "metrics-tls" not found Mar 13 10:36:18.444230 master-0 kubenswrapper[7271]: I0313 10:36:18.444145 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:36:18.444420 master-0 kubenswrapper[7271]: E0313 10:36:18.444280 7271 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 10:36:18.444420 master-0 kubenswrapper[7271]: E0313 10:36:18.444363 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls podName:4d5479f3-51ec-4b93-8188-21cdda44828d nodeName:}" failed. No retries permitted until 2026-03-13 10:36:50.444341931 +0000 UTC m=+64.971164491 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-vk9qz" (UID: "4d5479f3-51ec-4b93-8188-21cdda44828d") : secret "cluster-monitoring-operator-tls" not found Mar 13 10:36:18.444420 master-0 kubenswrapper[7271]: I0313 10:36:18.444280 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:36:18.444512 master-0 kubenswrapper[7271]: I0313 10:36:18.444463 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:36:18.444512 master-0 kubenswrapper[7271]: I0313 10:36:18.444498 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:36:18.445130 master-0 kubenswrapper[7271]: I0313 10:36:18.444824 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: 
\"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:36:18.445130 master-0 kubenswrapper[7271]: E0313 10:36:18.444866 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 10:36:18.445130 master-0 kubenswrapper[7271]: E0313 10:36:18.444909 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert podName:c455a959-d764-4b4f-a1e0-95c27495dd9d nodeName:}" failed. No retries permitted until 2026-03-13 10:36:50.444892406 +0000 UTC m=+64.971714796 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert") pod "catalog-operator-7d9c49f57b-2j5jl" (UID: "c455a959-d764-4b4f-a1e0-95c27495dd9d") : secret "catalog-operator-serving-cert" not found Mar 13 10:36:18.445130 master-0 kubenswrapper[7271]: I0313 10:36:18.444898 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:36:18.445130 master-0 kubenswrapper[7271]: E0313 10:36:18.445015 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 10:36:18.445130 master-0 kubenswrapper[7271]: E0313 10:36:18.445101 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert podName:2afe3890-e844-4dd3-ba49-3ac9178549bf nodeName:}" failed. 
No retries permitted until 2026-03-13 10:36:50.445078911 +0000 UTC m=+64.971901301 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert") pod "olm-operator-d64cfc9db-rsl2h" (UID: "2afe3890-e844-4dd3-ba49-3ac9178549bf") : secret "olm-operator-serving-cert" not found Mar 13 10:36:18.451907 master-0 kubenswrapper[7271]: I0313 10:36:18.451847 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:36:18.452502 master-0 kubenswrapper[7271]: I0313 10:36:18.452443 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:36:18.452646 master-0 kubenswrapper[7271]: I0313 10:36:18.452608 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:36:18.453204 master-0 kubenswrapper[7271]: I0313 10:36:18.453146 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:36:18.546700 master-0 kubenswrapper[7271]: I0313 10:36:18.546408 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:36:18.546700 master-0 kubenswrapper[7271]: E0313 10:36:18.546679 7271 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 10:36:18.547342 master-0 kubenswrapper[7271]: I0313 10:36:18.546741 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:36:18.547342 master-0 kubenswrapper[7271]: E0313 10:36:18.546776 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs podName:79bb87a4-8834-4c73-834e-356ccc1f7f9b nodeName:}" failed. No retries permitted until 2026-03-13 10:36:50.546751256 +0000 UTC m=+65.073573666 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs") pod "network-metrics-daemon-jz2lp" (UID: "79bb87a4-8834-4c73-834e-356ccc1f7f9b") : secret "metrics-daemon-secret" not found Mar 13 10:36:18.547342 master-0 kubenswrapper[7271]: E0313 10:36:18.546848 7271 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 10:36:18.547342 master-0 kubenswrapper[7271]: E0313 10:36:18.546882 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert podName:8a305f45-8689-45a8-8c8b-5954f2c863df nodeName:}" failed. No retries permitted until 2026-03-13 10:36:50.546870509 +0000 UTC m=+65.073692919 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-d5b45" (UID: "8a305f45-8689-45a8-8c8b-5954f2c863df") : secret "package-server-manager-serving-cert" not found Mar 13 10:36:18.547342 master-0 kubenswrapper[7271]: I0313 10:36:18.546879 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:36:18.547342 master-0 kubenswrapper[7271]: E0313 10:36:18.546972 7271 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 10:36:18.547342 master-0 kubenswrapper[7271]: I0313 
10:36:18.546990 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:36:18.547342 master-0 kubenswrapper[7271]: E0313 10:36:18.547027 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics podName:66f49a19-0e3b-4611-b8a6-5f5687fa20b6 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:50.547009222 +0000 UTC m=+65.073831642 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-85x6d" (UID: "66f49a19-0e3b-4611-b8a6-5f5687fa20b6") : secret "marketplace-operator-metrics" not found Mar 13 10:36:18.547342 master-0 kubenswrapper[7271]: I0313 10:36:18.547053 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:36:18.547794 master-0 kubenswrapper[7271]: E0313 10:36:18.547537 7271 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 10:36:18.547794 master-0 kubenswrapper[7271]: E0313 10:36:18.547713 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs 
podName:95339220-324d-45e7-bdc2-e4f42fbd1d32 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:50.54767168 +0000 UTC m=+65.074494260 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs") pod "multus-admission-controller-8d675b596-d787l" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32") : secret "multus-admission-controller-secret" not found Mar 13 10:36:18.551854 master-0 kubenswrapper[7271]: I0313 10:36:18.551795 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"cluster-version-operator-745944c6b7-s6k7z\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:36:18.697321 master-0 kubenswrapper[7271]: I0313 10:36:18.697124 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:36:18.702813 master-0 kubenswrapper[7271]: I0313 10:36:18.702771 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:36:18.702895 master-0 kubenswrapper[7271]: I0313 10:36:18.702844 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:36:18.709062 master-0 kubenswrapper[7271]: I0313 10:36:18.708996 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:36:18.736485 master-0 kubenswrapper[7271]: W0313 10:36:18.736426 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aaf36b4_e556_4723_a624_aa2edc69c301.slice/crio-cac6a5bd74eeb0c84d43669700e24c08a9a36b2d9ebb626bfd8e78bd9a500c83 WatchSource:0}: Error finding container cac6a5bd74eeb0c84d43669700e24c08a9a36b2d9ebb626bfd8e78bd9a500c83: Status 404 returned error can't find the container with id cac6a5bd74eeb0c84d43669700e24c08a9a36b2d9ebb626bfd8e78bd9a500c83 Mar 13 10:36:18.982509 master-0 kubenswrapper[7271]: I0313 10:36:18.982345 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" event={"ID":"4aaf36b4-e556-4723-a624-aa2edc69c301","Type":"ContainerStarted","Data":"cac6a5bd74eeb0c84d43669700e24c08a9a36b2d9ebb626bfd8e78bd9a500c83"} Mar 13 10:36:22.898257 master-0 kubenswrapper[7271]: I0313 10:36:22.895689 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"] Mar 13 10:36:22.919631 master-0 kubenswrapper[7271]: I0313 10:36:22.917422 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-wjrpm"] Mar 13 10:36:22.919631 master-0 kubenswrapper[7271]: I0313 10:36:22.917477 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"] Mar 13 10:36:22.959575 master-0 kubenswrapper[7271]: W0313 10:36:22.959458 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ff2ab1c_7057_4e18_8e32_68807f86532a.slice/crio-9d325882d9051ed4dcae015a4e37aab1ae44cf25e837fbca8dbbfcfc4d9934e5 WatchSource:0}: Error finding container 
9d325882d9051ed4dcae015a4e37aab1ae44cf25e837fbca8dbbfcfc4d9934e5: Status 404 returned error can't find the container with id 9d325882d9051ed4dcae015a4e37aab1ae44cf25e837fbca8dbbfcfc4d9934e5 Mar 13 10:36:23.012611 master-0 kubenswrapper[7271]: I0313 10:36:23.011725 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" event={"ID":"8cf9326b-bc23-45c2-82c4-9c08c739ac5a","Type":"ContainerStarted","Data":"50614fe1bae99eef2fccbbf06f52ab65208692c910cfe5fe3711fe68d7b32786"} Mar 13 10:36:23.016042 master-0 kubenswrapper[7271]: I0313 10:36:23.013110 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" event={"ID":"3ff2ab1c-7057-4e18-8e32-68807f86532a","Type":"ContainerStarted","Data":"9d325882d9051ed4dcae015a4e37aab1ae44cf25e837fbca8dbbfcfc4d9934e5"} Mar 13 10:36:23.016042 master-0 kubenswrapper[7271]: I0313 10:36:23.013666 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" event={"ID":"42b4d53c-af72-44c8-9605-271445f95f87","Type":"ContainerStarted","Data":"13a004f2f44b204dd23b4531ea2ef3d4457cfe84fd8fdc544d2f9015f5747d61"} Mar 13 10:36:23.251635 master-0 kubenswrapper[7271]: I0313 10:36:23.251448 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"] Mar 13 10:36:23.252497 master-0 kubenswrapper[7271]: I0313 10:36:23.252466 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.256471 master-0 kubenswrapper[7271]: I0313 10:36:23.256396 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 10:36:23.256801 master-0 kubenswrapper[7271]: I0313 10:36:23.256783 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 10:36:23.256970 master-0 kubenswrapper[7271]: I0313 10:36:23.256792 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 10:36:23.262261 master-0 kubenswrapper[7271]: I0313 10:36:23.261804 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 10:36:23.262261 master-0 kubenswrapper[7271]: I0313 10:36:23.262192 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 10:36:23.262507 master-0 kubenswrapper[7271]: I0313 10:36:23.262481 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 10:36:23.264239 master-0 kubenswrapper[7271]: I0313 10:36:23.262644 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 10:36:23.264239 master-0 kubenswrapper[7271]: I0313 10:36:23.262780 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 10:36:23.391324 master-0 kubenswrapper[7271]: I0313 10:36:23.391257 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 10:36:23.391745 master-0 kubenswrapper[7271]: I0313 10:36:23.391652 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" 
podUID="ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf" containerName="installer" containerID="cri-o://d84eecae63542ba948d53d567f42cad7dd26e9b2bfc0e6b741cc53afc3e9e71f" gracePeriod=30 Mar 13 10:36:23.392454 master-0 kubenswrapper[7271]: I0313 10:36:23.392396 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"] Mar 13 10:36:23.454280 master-0 kubenswrapper[7271]: I0313 10:36:23.453649 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.454280 master-0 kubenswrapper[7271]: I0313 10:36:23.453718 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-serving-ca\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.454280 master-0 kubenswrapper[7271]: I0313 10:36:23.453778 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-dir\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.454280 master-0 kubenswrapper[7271]: I0313 10:36:23.453830 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-policies\") pod \"apiserver-778fb45b4-65f7b\" (UID: 
\"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.454280 master-0 kubenswrapper[7271]: I0313 10:36:23.453872 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-client\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.454280 master-0 kubenswrapper[7271]: I0313 10:36:23.453897 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-encryption-config\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.454280 master-0 kubenswrapper[7271]: I0313 10:36:23.453917 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-trusted-ca-bundle\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.454280 master-0 kubenswrapper[7271]: I0313 10:36:23.453964 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdgld\" (UniqueName: \"kubernetes.io/projected/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-kube-api-access-tdgld\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.554982 master-0 kubenswrapper[7271]: I0313 10:36:23.554916 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.554982 master-0 kubenswrapper[7271]: I0313 10:36:23.554989 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-serving-ca\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.555277 master-0 kubenswrapper[7271]: I0313 10:36:23.555020 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-dir\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:23.555277 master-0 kubenswrapper[7271]: E0313 10:36:23.555051 7271 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 13 10:36:23.555277 master-0 kubenswrapper[7271]: E0313 10:36:23.555127 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert podName:4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b nodeName:}" failed. No retries permitted until 2026-03-13 10:36:24.055104315 +0000 UTC m=+38.581926705 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert") pod "apiserver-778fb45b4-65f7b" (UID: "4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b") : secret "serving-cert" not found
Mar 13 10:36:23.555277 master-0 kubenswrapper[7271]: I0313 10:36:23.555054 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-policies\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.555277 master-0 kubenswrapper[7271]: I0313 10:36:23.555184 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-client\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.555277 master-0 kubenswrapper[7271]: I0313 10:36:23.555212 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-encryption-config\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.555277 master-0 kubenswrapper[7271]: I0313 10:36:23.555230 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-trusted-ca-bundle\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.555277 master-0 kubenswrapper[7271]: I0313 10:36:23.555277 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdgld\" (UniqueName: \"kubernetes.io/projected/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-kube-api-access-tdgld\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.557540 master-0 kubenswrapper[7271]: I0313 10:36:23.555967 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-dir\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.557540 master-0 kubenswrapper[7271]: I0313 10:36:23.556039 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-policies\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.557540 master-0 kubenswrapper[7271]: I0313 10:36:23.556093 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-serving-ca\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.557540 master-0 kubenswrapper[7271]: I0313 10:36:23.556655 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-trusted-ca-bundle\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.563355 master-0 kubenswrapper[7271]: I0313 10:36:23.562982 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-encryption-config\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.563355 master-0 kubenswrapper[7271]: I0313 10:36:23.562986 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-client\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.666757 master-0 kubenswrapper[7271]: I0313 10:36:23.664667 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6467bdd544-n9745"]
Mar 13 10:36:23.666757 master-0 kubenswrapper[7271]: I0313 10:36:23.665634 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.673615 master-0 kubenswrapper[7271]: I0313 10:36:23.672227 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 13 10:36:23.673615 master-0 kubenswrapper[7271]: I0313 10:36:23.672238 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 13 10:36:23.673615 master-0 kubenswrapper[7271]: I0313 10:36:23.672397 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 10:36:23.673615 master-0 kubenswrapper[7271]: I0313 10:36:23.672525 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 10:36:23.673615 master-0 kubenswrapper[7271]: I0313 10:36:23.672546 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 10:36:23.673615 master-0 kubenswrapper[7271]: I0313 10:36:23.672405 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 10:36:23.673615 master-0 kubenswrapper[7271]: I0313 10:36:23.672691 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 13 10:36:23.673615 master-0 kubenswrapper[7271]: I0313 10:36:23.672763 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 10:36:23.673615 master-0 kubenswrapper[7271]: I0313 10:36:23.673414 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 10:36:23.681771 master-0 kubenswrapper[7271]: I0313 10:36:23.681152 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 10:36:23.801623 master-0 kubenswrapper[7271]: I0313 10:36:23.794704 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6467bdd544-n9745"]
Mar 13 10:36:23.828709 master-0 kubenswrapper[7271]: I0313 10:36:23.819462 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdgld\" (UniqueName: \"kubernetes.io/projected/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-kube-api-access-tdgld\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:23.859963 master-0 kubenswrapper[7271]: I0313 10:36:23.859473 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-trusted-ca-bundle\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.859963 master-0 kubenswrapper[7271]: I0313 10:36:23.859604 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-serving-ca\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.859963 master-0 kubenswrapper[7271]: I0313 10:36:23.859641 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn528\" (UniqueName: \"kubernetes.io/projected/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-kube-api-access-dn528\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.859963 master-0 kubenswrapper[7271]: I0313 10:36:23.859675 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-client\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.859963 master-0 kubenswrapper[7271]: I0313 10:36:23.859698 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-serving-cert\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.859963 master-0 kubenswrapper[7271]: I0313 10:36:23.859755 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-node-pullsecrets\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.859963 master-0 kubenswrapper[7271]: I0313 10:36:23.859782 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.859963 master-0 kubenswrapper[7271]: I0313 10:36:23.859924 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit-dir\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.860521 master-0 kubenswrapper[7271]: I0313 10:36:23.860403 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-encryption-config\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.860521 master-0 kubenswrapper[7271]: I0313 10:36:23.860439 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-config\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.860521 master-0 kubenswrapper[7271]: I0313 10:36:23.860459 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-image-import-ca\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.961738 master-0 kubenswrapper[7271]: I0313 10:36:23.961675 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.961789 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit-dir\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: E0313 10:36:23.961928 7271 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: E0313 10:36:23.962045 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit podName:e2c0a472-ddf3-4b48-a431-c38a6c5130ed nodeName:}" failed. No retries permitted until 2026-03-13 10:36:24.462017449 +0000 UTC m=+38.988839839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit") pod "apiserver-6467bdd544-n9745" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed") : configmap "audit-0" not found
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962138 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-encryption-config\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962181 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-config\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962209 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-image-import-ca\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962235 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-trusted-ca-bundle\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962286 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-serving-ca\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962314 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn528\" (UniqueName: \"kubernetes.io/projected/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-kube-api-access-dn528\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962349 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-client\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962376 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-serving-cert\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962443 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-node-pullsecrets\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.962544 master-0 kubenswrapper[7271]: I0313 10:36:23.962543 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-node-pullsecrets\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.963079 master-0 kubenswrapper[7271]: I0313 10:36:23.962601 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit-dir\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.963867 master-0 kubenswrapper[7271]: I0313 10:36:23.963678 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-image-import-ca\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.964064 master-0 kubenswrapper[7271]: I0313 10:36:23.964029 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-serving-ca\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.964606 master-0 kubenswrapper[7271]: I0313 10:36:23.964541 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-config\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.964853 master-0 kubenswrapper[7271]: I0313 10:36:23.964826 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-trusted-ca-bundle\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.966960 master-0 kubenswrapper[7271]: I0313 10:36:23.966902 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-serving-cert\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.967045 master-0 kubenswrapper[7271]: I0313 10:36:23.966979 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-client\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:23.967471 master-0 kubenswrapper[7271]: I0313 10:36:23.967446 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-encryption-config\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:24.049975 master-0 kubenswrapper[7271]: I0313 10:36:24.049902 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn528\" (UniqueName: \"kubernetes.io/projected/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-kube-api-access-dn528\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:24.070673 master-0 kubenswrapper[7271]: I0313 10:36:24.064492 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:24.070673 master-0 kubenswrapper[7271]: E0313 10:36:24.066784 7271 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 10:36:24.070673 master-0 kubenswrapper[7271]: E0313 10:36:24.066884 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert podName:4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b nodeName:}" failed. No retries permitted until 2026-03-13 10:36:25.066858879 +0000 UTC m=+39.593681459 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert") pod "apiserver-778fb45b4-65f7b" (UID: "4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b") : secret "serving-cert" not found
Mar 13 10:36:24.173999 master-0 kubenswrapper[7271]: I0313 10:36:24.173683 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"]
Mar 13 10:36:24.181619 master-0 kubenswrapper[7271]: I0313 10:36:24.179351 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.192383 master-0 kubenswrapper[7271]: I0313 10:36:24.184683 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"]
Mar 13 10:36:24.206553 master-0 kubenswrapper[7271]: I0313 10:36:24.202829 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 13 10:36:24.206553 master-0 kubenswrapper[7271]: I0313 10:36:24.203173 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 13 10:36:24.206553 master-0 kubenswrapper[7271]: I0313 10:36:24.204239 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 13 10:36:24.206553 master-0 kubenswrapper[7271]: I0313 10:36:24.204572 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 13 10:36:24.268981 master-0 kubenswrapper[7271]: I0313 10:36:24.268874 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/257a4a8b-014c-4473-80a0-e95cf6d41bf1-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.269251 master-0 kubenswrapper[7271]: I0313 10:36:24.269136 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.269477 master-0 kubenswrapper[7271]: I0313 10:36:24.269421 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzv5v\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-kube-api-access-hzv5v\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.269538 master-0 kubenswrapper[7271]: I0313 10:36:24.269500 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.269657 master-0 kubenswrapper[7271]: I0313 10:36:24.269625 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.269801 master-0 kubenswrapper[7271]: I0313 10:36:24.269756 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.371055 master-0 kubenswrapper[7271]: I0313 10:36:24.370893 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.371055 master-0 kubenswrapper[7271]: I0313 10:36:24.370982 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/257a4a8b-014c-4473-80a0-e95cf6d41bf1-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.371055 master-0 kubenswrapper[7271]: I0313 10:36:24.371054 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.371055 master-0 kubenswrapper[7271]: E0313 10:36:24.371066 7271 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found
Mar 13 10:36:24.371470 master-0 kubenswrapper[7271]: E0313 10:36:24.371162 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs podName:257a4a8b-014c-4473-80a0-e95cf6d41bf1 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:24.871136582 +0000 UTC m=+39.397959152 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-f46qd" (UID: "257a4a8b-014c-4473-80a0-e95cf6d41bf1") : secret "catalogserver-cert" not found
Mar 13 10:36:24.371470 master-0 kubenswrapper[7271]: I0313 10:36:24.371166 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.371470 master-0 kubenswrapper[7271]: I0313 10:36:24.371395 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzv5v\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-kube-api-access-hzv5v\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.371470 master-0 kubenswrapper[7271]: I0313 10:36:24.371446 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.371733 master-0 kubenswrapper[7271]: I0313 10:36:24.371521 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.371733 master-0 kubenswrapper[7271]: I0313 10:36:24.371698 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.371733 master-0 kubenswrapper[7271]: I0313 10:36:24.371699 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/257a4a8b-014c-4473-80a0-e95cf6d41bf1-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.417223 master-0 kubenswrapper[7271]: I0313 10:36:24.400749 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.473697 master-0 kubenswrapper[7271]: I0313 10:36:24.472428 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745"
Mar 13 10:36:24.473697 master-0 kubenswrapper[7271]: E0313 10:36:24.472579 7271 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 13 10:36:24.473697 master-0 kubenswrapper[7271]: E0313 10:36:24.472652 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit podName:e2c0a472-ddf3-4b48-a431-c38a6c5130ed nodeName:}" failed. No retries permitted until 2026-03-13 10:36:25.472634992 +0000 UTC m=+39.999457382 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit") pod "apiserver-6467bdd544-n9745" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed") : configmap "audit-0" not found
Mar 13 10:36:24.543759 master-0 kubenswrapper[7271]: I0313 10:36:24.543713 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzv5v\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-kube-api-access-hzv5v\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.877998 master-0 kubenswrapper[7271]: I0313 10:36:24.877931 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:36:24.878279 master-0 kubenswrapper[7271]: E0313 10:36:24.878168 7271 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found
Mar 13 10:36:24.878279 master-0 kubenswrapper[7271]: E0313 10:36:24.878274 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs podName:257a4a8b-014c-4473-80a0-e95cf6d41bf1 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:25.87824922 +0000 UTC m=+40.405071800 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-f46qd" (UID: "257a4a8b-014c-4473-80a0-e95cf6d41bf1") : secret "catalogserver-cert" not found
Mar 13 10:36:24.978289 master-0 kubenswrapper[7271]: I0313 10:36:24.978221 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"]
Mar 13 10:36:24.979096 master-0 kubenswrapper[7271]: I0313 10:36:24.979077 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:36:24.981223 master-0 kubenswrapper[7271]: I0313 10:36:24.981153 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 13 10:36:24.981502 master-0 kubenswrapper[7271]: I0313 10:36:24.981475 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 13 10:36:24.981647 master-0 kubenswrapper[7271]: I0313 10:36:24.981635 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 13 10:36:25.080813 master-0 kubenswrapper[7271]: I0313 10:36:25.080743 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:36:25.081093 master-0 kubenswrapper[7271]: E0313 10:36:25.080920 7271 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 10:36:25.081093 master-0 kubenswrapper[7271]: E0313 10:36:25.080982 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert podName:4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b nodeName:}" failed. No retries permitted until 2026-03-13 10:36:27.080963262 +0000 UTC m=+41.607785652 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert") pod "apiserver-778fb45b4-65f7b" (UID: "4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b") : secret "serving-cert" not found
Mar 13 10:36:25.082294 master-0 kubenswrapper[7271]: I0313 10:36:25.082254 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"]
Mar 13 10:36:25.182624 master-0 kubenswrapper[7271]: I0313 10:36:25.182387 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:36:25.182624 master-0 kubenswrapper[7271]: I0313 10:36:25.182450 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:36:25.182624 master-0 kubenswrapper[7271]: I0313 10:36:25.182476 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsswm\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-kube-api-access-zsswm\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:36:25.182624 master-0 kubenswrapper[7271]: I0313 10:36:25.182567 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b10584c2-ef04-4649-bcb6-9222c9530c3f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:36:25.183041 master-0 kubenswrapper[7271]: I0313 10:36:25.182756 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:36:25.283488 master-0 kubenswrapper[7271]: I0313 10:36:25.283418 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b10584c2-ef04-4649-bcb6-9222c9530c3f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID:
\"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.283938 master-0 kubenswrapper[7271]: I0313 10:36:25.283885 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.284032 master-0 kubenswrapper[7271]: I0313 10:36:25.284002 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.284081 master-0 kubenswrapper[7271]: I0313 10:36:25.284055 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.284081 master-0 kubenswrapper[7271]: E0313 10:36:25.284070 7271 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap "operator-controller-trusted-ca-bundle" not found Mar 13 10:36:25.284169 master-0 kubenswrapper[7271]: I0313 10:36:25.284086 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsswm\" (UniqueName: 
\"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-kube-api-access-zsswm\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.284169 master-0 kubenswrapper[7271]: E0313 10:36:25.284103 7271 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf: configmap "operator-controller-trusted-ca-bundle" not found Mar 13 10:36:25.284169 master-0 kubenswrapper[7271]: I0313 10:36:25.284117 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b10584c2-ef04-4649-bcb6-9222c9530c3f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.284287 master-0 kubenswrapper[7271]: E0313 10:36:25.284191 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs podName:b10584c2-ef04-4649-bcb6-9222c9530c3f nodeName:}" failed. No retries permitted until 2026-03-13 10:36:25.784170637 +0000 UTC m=+40.310993197 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs") pod "operator-controller-controller-manager-6598bfb6c4-bg6zf" (UID: "b10584c2-ef04-4649-bcb6-9222c9530c3f") : configmap "operator-controller-trusted-ca-bundle" not found Mar 13 10:36:25.284338 master-0 kubenswrapper[7271]: I0313 10:36:25.284267 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.284415 master-0 kubenswrapper[7271]: I0313 10:36:25.284367 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.421266 master-0 kubenswrapper[7271]: I0313 10:36:25.421157 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsswm\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-kube-api-access-zsswm\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.487341 master-0 kubenswrapper[7271]: I0313 10:36:25.487113 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit\") pod 
\"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745" Mar 13 10:36:25.487574 master-0 kubenswrapper[7271]: E0313 10:36:25.487406 7271 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 10:36:25.487574 master-0 kubenswrapper[7271]: E0313 10:36:25.487517 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit podName:e2c0a472-ddf3-4b48-a431-c38a6c5130ed nodeName:}" failed. No retries permitted until 2026-03-13 10:36:27.487490915 +0000 UTC m=+42.014313505 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit") pod "apiserver-6467bdd544-n9745" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed") : configmap "audit-0" not found Mar 13 10:36:25.792318 master-0 kubenswrapper[7271]: I0313 10:36:25.792190 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:25.792653 master-0 kubenswrapper[7271]: E0313 10:36:25.792471 7271 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap "operator-controller-trusted-ca-bundle" not found Mar 13 10:36:25.792653 master-0 kubenswrapper[7271]: E0313 10:36:25.792532 7271 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf: configmap "operator-controller-trusted-ca-bundle" not found Mar 13 10:36:25.792653 master-0 
kubenswrapper[7271]: E0313 10:36:25.792651 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs podName:b10584c2-ef04-4649-bcb6-9222c9530c3f nodeName:}" failed. No retries permitted until 2026-03-13 10:36:26.792624801 +0000 UTC m=+41.319447191 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs") pod "operator-controller-controller-manager-6598bfb6c4-bg6zf" (UID: "b10584c2-ef04-4649-bcb6-9222c9530c3f") : configmap "operator-controller-trusted-ca-bundle" not found Mar 13 10:36:25.894287 master-0 kubenswrapper[7271]: I0313 10:36:25.894224 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:36:25.894613 master-0 kubenswrapper[7271]: E0313 10:36:25.894436 7271 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 13 10:36:25.894613 master-0 kubenswrapper[7271]: E0313 10:36:25.894508 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs podName:257a4a8b-014c-4473-80a0-e95cf6d41bf1 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:27.894486291 +0000 UTC m=+42.421308681 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-f46qd" (UID: "257a4a8b-014c-4473-80a0-e95cf6d41bf1") : secret "catalogserver-cert" not found Mar 13 10:36:26.167758 master-0 kubenswrapper[7271]: I0313 10:36:26.161888 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 10:36:26.167758 master-0 kubenswrapper[7271]: I0313 10:36:26.162708 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.203813 master-0 kubenswrapper[7271]: I0313 10:36:26.203743 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-var-lock\") pod \"installer-2-master-0\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.204094 master-0 kubenswrapper[7271]: I0313 10:36:26.203888 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.204255 master-0 kubenswrapper[7271]: I0313 10:36:26.204188 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.263618 master-0 kubenswrapper[7271]: I0313 10:36:26.262879 7271 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-96vwf" Mar 13 10:36:26.305781 master-0 kubenswrapper[7271]: I0313 10:36:26.305703 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.306080 master-0 kubenswrapper[7271]: I0313 10:36:26.305805 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-var-lock\") pod \"installer-2-master-0\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.306080 master-0 kubenswrapper[7271]: I0313 10:36:26.306033 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.306147 master-0 kubenswrapper[7271]: I0313 10:36:26.306123 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.306212 master-0 kubenswrapper[7271]: I0313 10:36:26.306181 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-var-lock\") pod \"installer-2-master-0\" (UID: 
\"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.347611 master-0 kubenswrapper[7271]: I0313 10:36:26.329216 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 10:36:26.427067 master-0 kubenswrapper[7271]: I0313 10:36:26.424478 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.494165 master-0 kubenswrapper[7271]: I0313 10:36:26.494042 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:26.851846 master-0 kubenswrapper[7271]: I0313 10:36:26.819812 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:26.851846 master-0 kubenswrapper[7271]: I0313 10:36:26.824706 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:26.852845 master-0 kubenswrapper[7271]: I0313 10:36:26.852548 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:27.124307 master-0 kubenswrapper[7271]: I0313 10:36:27.124133 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:27.124307 master-0 kubenswrapper[7271]: E0313 10:36:27.124309 7271 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 13 10:36:27.124649 master-0 kubenswrapper[7271]: E0313 10:36:27.124369 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert podName:4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b nodeName:}" failed. No retries permitted until 2026-03-13 10:36:31.124353408 +0000 UTC m=+45.651175798 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert") pod "apiserver-778fb45b4-65f7b" (UID: "4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b") : secret "serving-cert" not found Mar 13 10:36:27.401780 master-0 kubenswrapper[7271]: I0313 10:36:27.401207 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6467bdd544-n9745"] Mar 13 10:36:27.401780 master-0 kubenswrapper[7271]: E0313 10:36:27.401577 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-6467bdd544-n9745" podUID="e2c0a472-ddf3-4b48-a431-c38a6c5130ed" Mar 13 10:36:27.536520 master-0 kubenswrapper[7271]: I0313 10:36:27.536459 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit\") pod \"apiserver-6467bdd544-n9745\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " pod="openshift-apiserver/apiserver-6467bdd544-n9745" Mar 13 10:36:27.536776 master-0 kubenswrapper[7271]: E0313 10:36:27.536655 7271 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 10:36:27.536776 master-0 kubenswrapper[7271]: E0313 10:36:27.536735 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit podName:e2c0a472-ddf3-4b48-a431-c38a6c5130ed nodeName:}" failed. No retries permitted until 2026-03-13 10:36:31.536714489 +0000 UTC m=+46.063536869 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit") pod "apiserver-6467bdd544-n9745" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed") : configmap "audit-0" not found Mar 13 10:36:27.945230 master-0 kubenswrapper[7271]: I0313 10:36:27.945147 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:36:27.945230 master-0 kubenswrapper[7271]: I0313 10:36:27.945240 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:36:27.945692 master-0 kubenswrapper[7271]: I0313 10:36:27.945330 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca\") pod \"route-controller-manager-fc5589ff-d48hw\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:36:27.945692 master-0 kubenswrapper[7271]: E0313 10:36:27.945449 7271 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:36:27.945692 master-0 kubenswrapper[7271]: E0313 10:36:27.945511 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca 
podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:36:59.945496113 +0000 UTC m=+74.472318503 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : configmap "client-ca" not found Mar 13 10:36:27.945952 master-0 kubenswrapper[7271]: E0313 10:36:27.945910 7271 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 13 10:36:27.945952 master-0 kubenswrapper[7271]: E0313 10:36:27.945952 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs podName:257a4a8b-014c-4473-80a0-e95cf6d41bf1 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:31.945943766 +0000 UTC m=+46.472766156 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-f46qd" (UID: "257a4a8b-014c-4473-80a0-e95cf6d41bf1") : secret "catalogserver-cert" not found Mar 13 10:36:27.946189 master-0 kubenswrapper[7271]: E0313 10:36:27.946122 7271 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 10:36:27.946301 master-0 kubenswrapper[7271]: E0313 10:36:27.946263 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert podName:8d60570a-069b-43fe-be3e-814955fec7ce nodeName:}" failed. No retries permitted until 2026-03-13 10:36:59.946235303 +0000 UTC m=+74.473057863 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert") pod "route-controller-manager-fc5589ff-d48hw" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce") : secret "serving-cert" not found Mar 13 10:36:28.052745 master-0 kubenswrapper[7271]: I0313 10:36:28.052564 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6467bdd544-n9745" Mar 13 10:36:28.067682 master-0 kubenswrapper[7271]: I0313 10:36:28.067615 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6467bdd544-n9745" Mar 13 10:36:28.147900 master-0 kubenswrapper[7271]: I0313 10:36:28.147845 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-serving-ca\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.147900 master-0 kubenswrapper[7271]: I0313 10:36:28.147908 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-node-pullsecrets\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.147966 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit-dir\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.148003 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-client\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.148023 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-config\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.148050 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-trusted-ca-bundle\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.148074 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-serving-cert\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.148067 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.148117 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn528\" (UniqueName: \"kubernetes.io/projected/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-kube-api-access-dn528\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.148139 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-image-import-ca\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.148067 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:36:28.148193 master-0 kubenswrapper[7271]: I0313 10:36:28.148159 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-encryption-config\") pod \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\" (UID: \"e2c0a472-ddf3-4b48-a431-c38a6c5130ed\") " Mar 13 10:36:28.148525 master-0 kubenswrapper[7271]: I0313 10:36:28.148495 7271 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:28.148525 master-0 kubenswrapper[7271]: I0313 10:36:28.148514 7271 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:28.148604 master-0 kubenswrapper[7271]: I0313 10:36:28.148541 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:36:28.148659 master-0 kubenswrapper[7271]: I0313 10:36:28.148634 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-config" (OuterVolumeSpecName: "config") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:36:28.149127 master-0 kubenswrapper[7271]: I0313 10:36:28.149101 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:36:28.149178 master-0 kubenswrapper[7271]: I0313 10:36:28.149119 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:36:28.153672 master-0 kubenswrapper[7271]: I0313 10:36:28.153597 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:36:28.153672 master-0 kubenswrapper[7271]: I0313 10:36:28.153602 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-kube-api-access-dn528" (OuterVolumeSpecName: "kube-api-access-dn528") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "kube-api-access-dn528". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:36:28.153672 master-0 kubenswrapper[7271]: I0313 10:36:28.153616 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:36:28.154363 master-0 kubenswrapper[7271]: I0313 10:36:28.154326 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "e2c0a472-ddf3-4b48-a431-c38a6c5130ed" (UID: "e2c0a472-ddf3-4b48-a431-c38a6c5130ed"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:36:28.250027 master-0 kubenswrapper[7271]: I0313 10:36:28.249709 7271 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:28.250027 master-0 kubenswrapper[7271]: I0313 10:36:28.249772 7271 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:28.250027 master-0 kubenswrapper[7271]: I0313 10:36:28.249786 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:28.250027 master-0 kubenswrapper[7271]: I0313 10:36:28.249799 7271 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:28.250027 master-0 kubenswrapper[7271]: I0313 10:36:28.249811 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:28.250027 master-0 kubenswrapper[7271]: I0313 10:36:28.249887 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn528\" (UniqueName: \"kubernetes.io/projected/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-kube-api-access-dn528\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:28.250027 master-0 kubenswrapper[7271]: I0313 10:36:28.249904 7271 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:28.250027 master-0 kubenswrapper[7271]: I0313 10:36:28.249915 7271 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:29.056825 master-0 kubenswrapper[7271]: I0313 10:36:29.056779 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6467bdd544-n9745" Mar 13 10:36:29.130448 master-0 kubenswrapper[7271]: I0313 10:36:29.129702 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-65bc99cdf7-7rjbr"] Mar 13 10:36:29.130840 master-0 kubenswrapper[7271]: I0313 10:36:29.130578 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.140423 master-0 kubenswrapper[7271]: I0313 10:36:29.136522 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 10:36:29.140423 master-0 kubenswrapper[7271]: I0313 10:36:29.136707 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 10:36:29.140423 master-0 kubenswrapper[7271]: I0313 10:36:29.136817 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 10:36:29.140423 master-0 kubenswrapper[7271]: I0313 10:36:29.136897 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 10:36:29.140423 master-0 kubenswrapper[7271]: I0313 10:36:29.136553 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 10:36:29.140423 master-0 kubenswrapper[7271]: I0313 10:36:29.137145 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 10:36:29.140423 master-0 kubenswrapper[7271]: I0313 10:36:29.137281 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 10:36:29.140423 master-0 kubenswrapper[7271]: I0313 10:36:29.139514 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 13 10:36:29.140423 master-0 kubenswrapper[7271]: I0313 10:36:29.139906 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 10:36:29.141437 master-0 kubenswrapper[7271]: I0313 10:36:29.141357 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6467bdd544-n9745"] Mar 13 10:36:29.146094 master-0 kubenswrapper[7271]: I0313 10:36:29.146051 7271 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 10:36:29.157337 master-0 kubenswrapper[7271]: I0313 10:36:29.152868 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-65bc99cdf7-7rjbr"] Mar 13 10:36:29.159879 master-0 kubenswrapper[7271]: I0313 10:36:29.159818 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-6467bdd544-n9745"] Mar 13 10:36:29.162273 master-0 kubenswrapper[7271]: I0313 10:36:29.162254 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit-dir\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.162453 master-0 kubenswrapper[7271]: I0313 10:36:29.162427 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-serving-cert\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.162577 master-0 kubenswrapper[7271]: I0313 10:36:29.162559 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.162702 master-0 kubenswrapper[7271]: I0313 10:36:29.162681 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-node-pullsecrets\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.162816 master-0 kubenswrapper[7271]: I0313 10:36:29.162800 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-trusted-ca-bundle\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.163098 master-0 kubenswrapper[7271]: I0313 10:36:29.163023 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-encryption-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.163164 master-0 kubenswrapper[7271]: I0313 10:36:29.163124 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-image-import-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.163378 master-0 kubenswrapper[7271]: I0313 10:36:29.163327 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.163425 master-0 kubenswrapper[7271]: I0313 10:36:29.163383 7271 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-client\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.163461 master-0 kubenswrapper[7271]: I0313 10:36:29.163448 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9cbm\" (UniqueName: \"kubernetes.io/projected/1d72d950-cfb4-4ed5-9ad6-f7266b937493-kube-api-access-h9cbm\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.163508 master-0 kubenswrapper[7271]: I0313 10:36:29.163482 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-serving-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265080 master-0 kubenswrapper[7271]: I0313 10:36:29.265002 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-trusted-ca-bundle\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265105 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-encryption-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " 
pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265142 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-image-import-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265181 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265200 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-client\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265234 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9cbm\" (UniqueName: \"kubernetes.io/projected/1d72d950-cfb4-4ed5-9ad6-f7266b937493-kube-api-access-h9cbm\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265251 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-serving-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: 
\"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265306 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit-dir\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265345 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-serving-cert\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265366 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-node-pullsecrets\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265404 master-0 kubenswrapper[7271]: I0313 10:36:29.265383 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.265781 master-0 kubenswrapper[7271]: I0313 10:36:29.265423 7271 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e2c0a472-ddf3-4b48-a431-c38a6c5130ed-audit\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:29.266215 master-0 
kubenswrapper[7271]: I0313 10:36:29.266188 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.267144 master-0 kubenswrapper[7271]: I0313 10:36:29.267021 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit-dir\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.267220 master-0 kubenswrapper[7271]: I0313 10:36:29.267182 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-node-pullsecrets\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.267646 master-0 kubenswrapper[7271]: I0313 10:36:29.267571 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-trusted-ca-bundle\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.267929 master-0 kubenswrapper[7271]: I0313 10:36:29.267883 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-image-import-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.267991 master-0 
kubenswrapper[7271]: I0313 10:36:29.267957 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-serving-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.268336 master-0 kubenswrapper[7271]: I0313 10:36:29.268305 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.271431 master-0 kubenswrapper[7271]: I0313 10:36:29.271393 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-encryption-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.271533 master-0 kubenswrapper[7271]: I0313 10:36:29.271499 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-client\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.271533 master-0 kubenswrapper[7271]: I0313 10:36:29.271488 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-serving-cert\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.285985 master-0 kubenswrapper[7271]: I0313 
10:36:29.285942 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9cbm\" (UniqueName: \"kubernetes.io/projected/1d72d950-cfb4-4ed5-9ad6-f7266b937493-kube-api-access-h9cbm\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.458106 master-0 kubenswrapper[7271]: I0313 10:36:29.457939 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:29.651627 master-0 kubenswrapper[7271]: I0313 10:36:29.651552 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c0a472-ddf3-4b48-a431-c38a6c5130ed" path="/var/lib/kubelet/pods/e2c0a472-ddf3-4b48-a431-c38a6c5130ed/volumes" Mar 13 10:36:30.788377 master-0 kubenswrapper[7271]: I0313 10:36:30.786965 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:30.788377 master-0 kubenswrapper[7271]: I0313 10:36:30.787013 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:30.788377 master-0 kubenswrapper[7271]: E0313 10:36:30.787192 7271 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 10:36:30.788377 master-0 kubenswrapper[7271]: E0313 10:36:30.787303 7271 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca podName:f7bcec3b-b3d9-432d-96b5-ba61d11ab010 nodeName:}" failed. No retries permitted until 2026-03-13 10:37:02.78727525 +0000 UTC m=+77.314097640 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca") pod "controller-manager-7c5b48d77b-g5f7k" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010") : configmap "client-ca" not found Mar 13 10:36:30.793141 master-0 kubenswrapper[7271]: I0313 10:36:30.793086 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"controller-manager-7c5b48d77b-g5f7k\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:31.193036 master-0 kubenswrapper[7271]: I0313 10:36:31.192976 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:31.197208 master-0 kubenswrapper[7271]: I0313 10:36:31.197169 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:31.395481 master-0 kubenswrapper[7271]: I0313 10:36:31.394485 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:31.552777 master-0 kubenswrapper[7271]: I0313 10:36:31.552003 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 10:36:31.859232 master-0 kubenswrapper[7271]: W0313 10:36:31.856337 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0d88291e_a7f9_4c77_8b81_50dda9d2ff14.slice/crio-4b2279f7e9b21dd11be17bb657c74ee88013cb874771e8b049bc705f20e9bf41 WatchSource:0}: Error finding container 4b2279f7e9b21dd11be17bb657c74ee88013cb874771e8b049bc705f20e9bf41: Status 404 returned error can't find the container with id 4b2279f7e9b21dd11be17bb657c74ee88013cb874771e8b049bc705f20e9bf41 Mar 13 10:36:31.881406 master-0 kubenswrapper[7271]: I0313 10:36:31.881339 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 10:36:31.917898 master-0 kubenswrapper[7271]: I0313 10:36:31.911975 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-65bc99cdf7-7rjbr"] Mar 13 10:36:31.917898 master-0 kubenswrapper[7271]: W0313 10:36:31.916074 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d72d950_cfb4_4ed5_9ad6_f7266b937493.slice/crio-48e8b214299b9f4db7879f744943297a290919968a8d4c7d50b6a78a9ada043a WatchSource:0}: Error finding container 48e8b214299b9f4db7879f744943297a290919968a8d4c7d50b6a78a9ada043a: Status 404 returned error can't find the container with id 48e8b214299b9f4db7879f744943297a290919968a8d4c7d50b6a78a9ada043a Mar 13 10:36:32.015925 master-0 kubenswrapper[7271]: I0313 10:36:32.007883 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs\") pod 
\"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:36:32.015925 master-0 kubenswrapper[7271]: I0313 10:36:32.013507 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:36:32.054886 master-0 kubenswrapper[7271]: I0313 10:36:32.049159 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:36:32.079247 master-0 kubenswrapper[7271]: I0313 10:36:32.079185 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"] Mar 13 10:36:32.080573 master-0 kubenswrapper[7271]: I0313 10:36:32.080524 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"] Mar 13 10:36:32.081175 master-0 kubenswrapper[7271]: W0313 10:36:32.081130 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb10584c2_ef04_4649_bcb6_9222c9530c3f.slice/crio-3a76099d084f4d1745cd0462dfdd6adb21bbcc918adfa4a88776287e0186cf5c WatchSource:0}: Error finding container 3a76099d084f4d1745cd0462dfdd6adb21bbcc918adfa4a88776287e0186cf5c: Status 404 returned error can't find the container with id 3a76099d084f4d1745cd0462dfdd6adb21bbcc918adfa4a88776287e0186cf5c Mar 13 10:36:32.085208 master-0 kubenswrapper[7271]: I0313 10:36:32.085146 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" 
event={"ID":"3ff2ab1c-7057-4e18-8e32-68807f86532a","Type":"ContainerStarted","Data":"8c182f021d1015aeedfcafbe78c6b391a34b8d09020ee44c3cd7ffd5f70ea542"} Mar 13 10:36:32.088256 master-0 kubenswrapper[7271]: I0313 10:36:32.088202 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" event={"ID":"42b4d53c-af72-44c8-9605-271445f95f87","Type":"ContainerStarted","Data":"4898ddf0b80011b0f9f0a24077d87c24f74962cf228e87be2367d09c896182b1"} Mar 13 10:36:32.102988 master-0 kubenswrapper[7271]: I0313 10:36:32.102919 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"0d88291e-a7f9-4c77-8b81-50dda9d2ff14","Type":"ContainerStarted","Data":"4b2279f7e9b21dd11be17bb657c74ee88013cb874771e8b049bc705f20e9bf41"} Mar 13 10:36:32.117668 master-0 kubenswrapper[7271]: I0313 10:36:32.117611 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" event={"ID":"1d72d950-cfb4-4ed5-9ad6-f7266b937493","Type":"ContainerStarted","Data":"48e8b214299b9f4db7879f744943297a290919968a8d4c7d50b6a78a9ada043a"} Mar 13 10:36:32.122038 master-0 kubenswrapper[7271]: I0313 10:36:32.121976 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-7wkqw"] Mar 13 10:36:32.124160 master-0 kubenswrapper[7271]: I0313 10:36:32.123096 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" event={"ID":"4aaf36b4-e556-4723-a624-aa2edc69c301","Type":"ContainerStarted","Data":"1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d"} Mar 13 10:36:32.124362 master-0 kubenswrapper[7271]: I0313 10:36:32.124337 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.144619 master-0 kubenswrapper[7271]: I0313 10:36:32.142488 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" event={"ID":"8cf9326b-bc23-45c2-82c4-9c08c739ac5a","Type":"ContainerStarted","Data":"43230423fe1ad4b520548b08f0898f9f7d5cb849ac1cf6fadabab03cda0d4f3c"} Mar 13 10:36:32.229329 master-0 kubenswrapper[7271]: I0313 10:36:32.229087 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229329 master-0 kubenswrapper[7271]: I0313 10:36:32.229157 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-sys\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229329 master-0 kubenswrapper[7271]: I0313 10:36:32.229174 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-var-lib-kubelet\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229329 master-0 kubenswrapper[7271]: I0313 10:36:32.229192 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-etc-tuned\") pod \"tuned-7wkqw\" (UID: 
\"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229329 master-0 kubenswrapper[7271]: I0313 10:36:32.229226 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-tmp\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229329 master-0 kubenswrapper[7271]: I0313 10:36:32.229310 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-conf\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229329 master-0 kubenswrapper[7271]: I0313 10:36:32.229330 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-host\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229683 master-0 kubenswrapper[7271]: I0313 10:36:32.229346 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-run\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229683 master-0 kubenswrapper[7271]: I0313 10:36:32.229370 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htqw9\" (UniqueName: 
\"kubernetes.io/projected/d9075a44-22d3-4562-819e-d5a92f013663-kube-api-access-htqw9\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229683 master-0 kubenswrapper[7271]: I0313 10:36:32.229432 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysconfig\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229683 master-0 kubenswrapper[7271]: I0313 10:36:32.229447 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-lib-modules\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229683 master-0 kubenswrapper[7271]: I0313 10:36:32.229473 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-systemd\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229683 master-0 kubenswrapper[7271]: I0313 10:36:32.229489 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-modprobe-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.229683 master-0 kubenswrapper[7271]: I0313 10:36:32.229504 7271 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-kubernetes\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.331413 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-conf\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.331877 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-host\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.331921 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-run\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.331956 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htqw9\" (UniqueName: \"kubernetes.io/projected/d9075a44-22d3-4562-819e-d5a92f013663-kube-api-access-htqw9\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332039 7271 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysconfig\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332112 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-lib-modules\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332162 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-systemd\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332187 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-modprobe-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332227 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-kubernetes\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332281 7271 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332342 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-sys\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332383 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-var-lib-kubelet\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332411 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-etc-tuned\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.332437 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-tmp\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.333776 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: 
\"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-systemd\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.334243 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-conf\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.334287 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-host\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.334445 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-run\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.334820 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysconfig\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.334904 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-lib-modules\") pod \"tuned-7wkqw\" (UID: 
\"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.334939 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.335011 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-modprobe-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.335067 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-kubernetes\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.335101 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-var-lib-kubelet\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.336619 master-0 kubenswrapper[7271]: I0313 10:36:32.335133 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-sys\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " 
pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.341761 master-0 kubenswrapper[7271]: I0313 10:36:32.340196 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-tmp\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.341761 master-0 kubenswrapper[7271]: I0313 10:36:32.341058 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-etc-tuned\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.373357 master-0 kubenswrapper[7271]: I0313 10:36:32.373294 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htqw9\" (UniqueName: \"kubernetes.io/projected/d9075a44-22d3-4562-819e-d5a92f013663-kube-api-access-htqw9\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.473265 master-0 kubenswrapper[7271]: I0313 10:36:32.472970 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"] Mar 13 10:36:32.481712 master-0 kubenswrapper[7271]: W0313 10:36:32.481670 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod257a4a8b_014c_4473_80a0_e95cf6d41bf1.slice/crio-9c60be2c4f1a0c68603912b47e268ac1ef0712a8ed512ee20014ad96ccd12d01 WatchSource:0}: Error finding container 9c60be2c4f1a0c68603912b47e268ac1ef0712a8ed512ee20014ad96ccd12d01: Status 404 returned error can't find the container with id 9c60be2c4f1a0c68603912b47e268ac1ef0712a8ed512ee20014ad96ccd12d01 Mar 13 10:36:32.526962 master-0 
kubenswrapper[7271]: I0313 10:36:32.526786 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k"] Mar 13 10:36:32.527615 master-0 kubenswrapper[7271]: E0313 10:36:32.527562 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" podUID="f7bcec3b-b3d9-432d-96b5-ba61d11ab010" Mar 13 10:36:32.547395 master-0 kubenswrapper[7271]: I0313 10:36:32.547330 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw"] Mar 13 10:36:32.547922 master-0 kubenswrapper[7271]: E0313 10:36:32.547882 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" podUID="8d60570a-069b-43fe-be3e-814955fec7ce" Mar 13 10:36:32.573618 master-0 kubenswrapper[7271]: I0313 10:36:32.569022 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:36:32.775130 master-0 kubenswrapper[7271]: I0313 10:36:32.769142 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zc596"] Mar 13 10:36:32.775130 master-0 kubenswrapper[7271]: I0313 10:36:32.770150 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zc596" Mar 13 10:36:32.775130 master-0 kubenswrapper[7271]: I0313 10:36:32.772383 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 10:36:32.775130 master-0 kubenswrapper[7271]: I0313 10:36:32.772628 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 10:36:32.779715 master-0 kubenswrapper[7271]: I0313 10:36:32.779599 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zc596"] Mar 13 10:36:32.780104 master-0 kubenswrapper[7271]: I0313 10:36:32.780019 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 10:36:32.784609 master-0 kubenswrapper[7271]: I0313 10:36:32.781001 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 10:36:32.856705 master-0 kubenswrapper[7271]: I0313 10:36:32.848955 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11927952-723f-4d6d-922b-73139abe8877-config-volume\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:32.856705 master-0 kubenswrapper[7271]: I0313 10:36:32.849035 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11927952-723f-4d6d-922b-73139abe8877-metrics-tls\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:32.856705 master-0 kubenswrapper[7271]: I0313 10:36:32.849120 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgb25\" (UniqueName: 
\"kubernetes.io/projected/11927952-723f-4d6d-922b-73139abe8877-kube-api-access-kgb25\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:32.951117 master-0 kubenswrapper[7271]: I0313 10:36:32.950542 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgb25\" (UniqueName: \"kubernetes.io/projected/11927952-723f-4d6d-922b-73139abe8877-kube-api-access-kgb25\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:32.951117 master-0 kubenswrapper[7271]: I0313 10:36:32.950956 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11927952-723f-4d6d-922b-73139abe8877-config-volume\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:32.951117 master-0 kubenswrapper[7271]: I0313 10:36:32.951102 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11927952-723f-4d6d-922b-73139abe8877-metrics-tls\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:32.952350 master-0 kubenswrapper[7271]: E0313 10:36:32.951286 7271 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 13 10:36:32.952350 master-0 kubenswrapper[7271]: E0313 10:36:32.951360 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11927952-723f-4d6d-922b-73139abe8877-metrics-tls podName:11927952-723f-4d6d-922b-73139abe8877 nodeName:}" failed. No retries permitted until 2026-03-13 10:36:33.451339053 +0000 UTC m=+47.978161453 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/11927952-723f-4d6d-922b-73139abe8877-metrics-tls") pod "dns-default-zc596" (UID: "11927952-723f-4d6d-922b-73139abe8877") : secret "dns-default-metrics-tls" not found Mar 13 10:36:32.952350 master-0 kubenswrapper[7271]: I0313 10:36:32.951813 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11927952-723f-4d6d-922b-73139abe8877-config-volume\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:33.002169 master-0 kubenswrapper[7271]: I0313 10:36:33.002060 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgb25\" (UniqueName: \"kubernetes.io/projected/11927952-723f-4d6d-922b-73139abe8877-kube-api-access-kgb25\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:33.088166 master-0 kubenswrapper[7271]: I0313 10:36:33.082698 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tfwn8"] Mar 13 10:36:33.095643 master-0 kubenswrapper[7271]: I0313 10:36:33.095241 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tfwn8" Mar 13 10:36:33.154633 master-0 kubenswrapper[7271]: I0313 10:36:33.154550 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwc4l\" (UniqueName: \"kubernetes.io/projected/e485e709-32ba-442b-98e5-b4073516c0ab-kube-api-access-qwc4l\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8" Mar 13 10:36:33.154834 master-0 kubenswrapper[7271]: I0313 10:36:33.154676 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e485e709-32ba-442b-98e5-b4073516c0ab-hosts-file\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8" Mar 13 10:36:33.156225 master-0 kubenswrapper[7271]: I0313 10:36:33.155746 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"0d88291e-a7f9-4c77-8b81-50dda9d2ff14","Type":"ContainerStarted","Data":"97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9"} Mar 13 10:36:33.156225 master-0 kubenswrapper[7271]: I0313 10:36:33.155913 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="0d88291e-a7f9-4c77-8b81-50dda9d2ff14" containerName="installer" containerID="cri-o://97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9" gracePeriod=30 Mar 13 10:36:33.162795 master-0 kubenswrapper[7271]: I0313 10:36:33.162720 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" event={"ID":"d9075a44-22d3-4562-819e-d5a92f013663","Type":"ContainerStarted","Data":"5dddca029c96ddfe210a10916494b8ff78d792070dc83b6091f6e125bb1cd129"} Mar 13 10:36:33.162795 master-0 kubenswrapper[7271]: I0313 
10:36:33.162794 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" event={"ID":"d9075a44-22d3-4562-819e-d5a92f013663","Type":"ContainerStarted","Data":"91e3b8c73add296842ef8e7a2c3aeddee50a28ab7af6145ac329ac27f7fd9e5f"} Mar 13 10:36:33.165044 master-0 kubenswrapper[7271]: I0313 10:36:33.164988 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" event={"ID":"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b","Type":"ContainerStarted","Data":"64e301f64932b9e42866a17f98ce668f6dac597e77b8c15551a291086a0c377b"} Mar 13 10:36:33.170238 master-0 kubenswrapper[7271]: I0313 10:36:33.170161 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" event={"ID":"b10584c2-ef04-4649-bcb6-9222c9530c3f","Type":"ContainerStarted","Data":"9399008275bd9f71491c5fcb5ffc2a50f02e98d1575d51521c59927fcc8b68b4"} Mar 13 10:36:33.170603 master-0 kubenswrapper[7271]: I0313 10:36:33.170288 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" event={"ID":"b10584c2-ef04-4649-bcb6-9222c9530c3f","Type":"ContainerStarted","Data":"f661d164e1cae288da9b5b814f572be1703c2513d35aac45b2b22784229191e4"} Mar 13 10:36:33.170603 master-0 kubenswrapper[7271]: I0313 10:36:33.170306 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" event={"ID":"b10584c2-ef04-4649-bcb6-9222c9530c3f","Type":"ContainerStarted","Data":"3a76099d084f4d1745cd0462dfdd6adb21bbcc918adfa4a88776287e0186cf5c"} Mar 13 10:36:33.181250 master-0 kubenswrapper[7271]: I0313 10:36:33.175080 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" 
event={"ID":"3ff2ab1c-7057-4e18-8e32-68807f86532a","Type":"ContainerStarted","Data":"7da189f48cca18f2a6513cb42758c09be8999b8f1d8db1e600e78cd3f41ec07d"} Mar 13 10:36:33.181250 master-0 kubenswrapper[7271]: I0313 10:36:33.175572 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=8.175544515 podStartE2EDuration="8.175544515s" podCreationTimestamp="2026-03-13 10:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:33.173924811 +0000 UTC m=+47.700747221" watchObservedRunningTime="2026-03-13 10:36:33.175544515 +0000 UTC m=+47.702366905" Mar 13 10:36:33.181250 master-0 kubenswrapper[7271]: I0313 10:36:33.178694 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:33.181250 master-0 kubenswrapper[7271]: I0313 10:36:33.179163 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" event={"ID":"257a4a8b-014c-4473-80a0-e95cf6d41bf1","Type":"ContainerStarted","Data":"9c3f29c1b19afddcef9987888e0226a340347a13991d9e5e12422447c0c483b6"} Mar 13 10:36:33.181250 master-0 kubenswrapper[7271]: I0313 10:36:33.179191 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" event={"ID":"257a4a8b-014c-4473-80a0-e95cf6d41bf1","Type":"ContainerStarted","Data":"9c60be2c4f1a0c68603912b47e268ac1ef0712a8ed512ee20014ad96ccd12d01"} Mar 13 10:36:33.181250 master-0 kubenswrapper[7271]: I0313 10:36:33.179240 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:36:33.197177 master-0 kubenswrapper[7271]: I0313 10:36:33.197084 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" podStartSLOduration=9.197060516 podStartE2EDuration="9.197060516s" podCreationTimestamp="2026-03-13 10:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:33.196032038 +0000 UTC m=+47.722854428" watchObservedRunningTime="2026-03-13 10:36:33.197060516 +0000 UTC m=+47.723882906" Mar 13 10:36:33.229669 master-0 kubenswrapper[7271]: I0313 10:36:33.229509 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" podStartSLOduration=1.229483161 podStartE2EDuration="1.229483161s" podCreationTimestamp="2026-03-13 10:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:33.214527387 +0000 UTC m=+47.741349787" watchObservedRunningTime="2026-03-13 10:36:33.229483161 +0000 UTC m=+47.756305551" Mar 13 10:36:33.257325 master-0 kubenswrapper[7271]: I0313 10:36:33.257241 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwc4l\" (UniqueName: \"kubernetes.io/projected/e485e709-32ba-442b-98e5-b4073516c0ab-kube-api-access-qwc4l\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8" Mar 13 10:36:33.257574 master-0 kubenswrapper[7271]: I0313 10:36:33.257418 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e485e709-32ba-442b-98e5-b4073516c0ab-hosts-file\") pod 
\"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8" Mar 13 10:36:33.260625 master-0 kubenswrapper[7271]: I0313 10:36:33.260548 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e485e709-32ba-442b-98e5-b4073516c0ab-hosts-file\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8" Mar 13 10:36:33.273339 master-0 kubenswrapper[7271]: I0313 10:36:33.273275 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:33.280933 master-0 kubenswrapper[7271]: I0313 10:36:33.280865 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:36:33.294085 master-0 kubenswrapper[7271]: I0313 10:36:33.293518 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwc4l\" (UniqueName: \"kubernetes.io/projected/e485e709-32ba-442b-98e5-b4073516c0ab-kube-api-access-qwc4l\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8" Mar 13 10:36:33.358943 master-0 kubenswrapper[7271]: I0313 10:36:33.358780 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-config\") pod \"8d60570a-069b-43fe-be3e-814955fec7ce\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " Mar 13 10:36:33.359273 master-0 kubenswrapper[7271]: I0313 10:36:33.359252 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7hrd\" (UniqueName: \"kubernetes.io/projected/8d60570a-069b-43fe-be3e-814955fec7ce-kube-api-access-d7hrd\") pod 
\"8d60570a-069b-43fe-be3e-814955fec7ce\" (UID: \"8d60570a-069b-43fe-be3e-814955fec7ce\") " Mar 13 10:36:33.359417 master-0 kubenswrapper[7271]: I0313 10:36:33.359398 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-config\") pod \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " Mar 13 10:36:33.359531 master-0 kubenswrapper[7271]: I0313 10:36:33.359516 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") pod \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " Mar 13 10:36:33.359674 master-0 kubenswrapper[7271]: I0313 10:36:33.359656 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-proxy-ca-bundles\") pod \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " Mar 13 10:36:33.359780 master-0 kubenswrapper[7271]: I0313 10:36:33.359767 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxzzm\" (UniqueName: \"kubernetes.io/projected/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-kube-api-access-bxzzm\") pod \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\" (UID: \"f7bcec3b-b3d9-432d-96b5-ba61d11ab010\") " Mar 13 10:36:33.360609 master-0 kubenswrapper[7271]: I0313 10:36:33.360539 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-config" (OuterVolumeSpecName: "config") pod "8d60570a-069b-43fe-be3e-814955fec7ce" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:36:33.360740 master-0 kubenswrapper[7271]: I0313 10:36:33.360623 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-config" (OuterVolumeSpecName: "config") pod "f7bcec3b-b3d9-432d-96b5-ba61d11ab010" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:36:33.361643 master-0 kubenswrapper[7271]: I0313 10:36:33.361171 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f7bcec3b-b3d9-432d-96b5-ba61d11ab010" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:36:33.363782 master-0 kubenswrapper[7271]: I0313 10:36:33.363741 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f7bcec3b-b3d9-432d-96b5-ba61d11ab010" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:36:33.363862 master-0 kubenswrapper[7271]: I0313 10:36:33.363770 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d60570a-069b-43fe-be3e-814955fec7ce-kube-api-access-d7hrd" (OuterVolumeSpecName: "kube-api-access-d7hrd") pod "8d60570a-069b-43fe-be3e-814955fec7ce" (UID: "8d60570a-069b-43fe-be3e-814955fec7ce"). InnerVolumeSpecName "kube-api-access-d7hrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:36:33.364447 master-0 kubenswrapper[7271]: I0313 10:36:33.364403 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-kube-api-access-bxzzm" (OuterVolumeSpecName: "kube-api-access-bxzzm") pod "f7bcec3b-b3d9-432d-96b5-ba61d11ab010" (UID: "f7bcec3b-b3d9-432d-96b5-ba61d11ab010"). InnerVolumeSpecName "kube-api-access-bxzzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:36:33.416475 master-0 kubenswrapper[7271]: I0313 10:36:33.416340 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tfwn8" Mar 13 10:36:33.440319 master-0 kubenswrapper[7271]: W0313 10:36:33.439918 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode485e709_32ba_442b_98e5_b4073516c0ab.slice/crio-427d8baf8b36c464fef89b4b9363187b5106a9ee18a5220827b5f1bf40b93c0d WatchSource:0}: Error finding container 427d8baf8b36c464fef89b4b9363187b5106a9ee18a5220827b5f1bf40b93c0d: Status 404 returned error can't find the container with id 427d8baf8b36c464fef89b4b9363187b5106a9ee18a5220827b5f1bf40b93c0d Mar 13 10:36:33.462951 master-0 kubenswrapper[7271]: I0313 10:36:33.461828 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11927952-723f-4d6d-922b-73139abe8877-metrics-tls\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:33.462951 master-0 kubenswrapper[7271]: I0313 10:36:33.462210 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:33.462951 master-0 kubenswrapper[7271]: I0313 10:36:33.462226 7271 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7hrd\" (UniqueName: \"kubernetes.io/projected/8d60570a-069b-43fe-be3e-814955fec7ce-kube-api-access-d7hrd\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:33.462951 master-0 kubenswrapper[7271]: I0313 10:36:33.462239 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:33.462951 master-0 kubenswrapper[7271]: I0313 10:36:33.462252 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:33.462951 master-0 kubenswrapper[7271]: I0313 10:36:33.462263 7271 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:33.462951 master-0 kubenswrapper[7271]: I0313 10:36:33.462276 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxzzm\" (UniqueName: \"kubernetes.io/projected/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-kube-api-access-bxzzm\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:33.465845 master-0 kubenswrapper[7271]: I0313 10:36:33.465817 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11927952-723f-4d6d-922b-73139abe8877-metrics-tls\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:36:33.481731 master-0 kubenswrapper[7271]: I0313 10:36:33.481255 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zc596" Mar 13 10:36:33.491631 master-0 kubenswrapper[7271]: I0313 10:36:33.491465 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_0d88291e-a7f9-4c77-8b81-50dda9d2ff14/installer/0.log" Mar 13 10:36:33.491631 master-0 kubenswrapper[7271]: I0313 10:36:33.491557 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:33.563042 master-0 kubenswrapper[7271]: I0313 10:36:33.562976 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kube-api-access\") pod \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " Mar 13 10:36:33.563289 master-0 kubenswrapper[7271]: I0313 10:36:33.563057 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kubelet-dir\") pod \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " Mar 13 10:36:33.563289 master-0 kubenswrapper[7271]: I0313 10:36:33.563194 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-var-lock\") pod \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\" (UID: \"0d88291e-a7f9-4c77-8b81-50dda9d2ff14\") " Mar 13 10:36:33.563289 master-0 kubenswrapper[7271]: I0313 10:36:33.563242 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0d88291e-a7f9-4c77-8b81-50dda9d2ff14" (UID: "0d88291e-a7f9-4c77-8b81-50dda9d2ff14"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:36:33.563428 master-0 kubenswrapper[7271]: I0313 10:36:33.563331 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-var-lock" (OuterVolumeSpecName: "var-lock") pod "0d88291e-a7f9-4c77-8b81-50dda9d2ff14" (UID: "0d88291e-a7f9-4c77-8b81-50dda9d2ff14"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:36:33.563545 master-0 kubenswrapper[7271]: I0313 10:36:33.563512 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:33.563545 master-0 kubenswrapper[7271]: I0313 10:36:33.563528 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:33.567391 master-0 kubenswrapper[7271]: I0313 10:36:33.567339 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0d88291e-a7f9-4c77-8b81-50dda9d2ff14" (UID: "0d88291e-a7f9-4c77-8b81-50dda9d2ff14"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:36:33.664732 master-0 kubenswrapper[7271]: I0313 10:36:33.664685 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d88291e-a7f9-4c77-8b81-50dda9d2ff14-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:33.705682 master-0 kubenswrapper[7271]: I0313 10:36:33.705393 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zc596"] Mar 13 10:36:33.720160 master-0 kubenswrapper[7271]: W0313 10:36:33.720095 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11927952_723f_4d6d_922b_73139abe8877.slice/crio-8b123d8cf30f9a2e585e105a6d1e6a093488b477d996f0893a6a50a5c5b92b38 WatchSource:0}: Error finding container 8b123d8cf30f9a2e585e105a6d1e6a093488b477d996f0893a6a50a5c5b92b38: Status 404 returned error can't find the container with id 8b123d8cf30f9a2e585e105a6d1e6a093488b477d996f0893a6a50a5c5b92b38 Mar 13 10:36:34.189379 master-0 kubenswrapper[7271]: I0313 10:36:34.188662 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tfwn8" event={"ID":"e485e709-32ba-442b-98e5-b4073516c0ab","Type":"ContainerStarted","Data":"b3a04a83cde4ac7f63af613a46a551ae08b8134f1b3d5c5fa496cf7d1a3ac019"} Mar 13 10:36:34.189379 master-0 kubenswrapper[7271]: I0313 10:36:34.189385 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tfwn8" event={"ID":"e485e709-32ba-442b-98e5-b4073516c0ab","Type":"ContainerStarted","Data":"427d8baf8b36c464fef89b4b9363187b5106a9ee18a5220827b5f1bf40b93c0d"} Mar 13 10:36:34.197093 master-0 kubenswrapper[7271]: I0313 10:36:34.197046 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" 
event={"ID":"257a4a8b-014c-4473-80a0-e95cf6d41bf1","Type":"ContainerStarted","Data":"5f05908e71448e64ca18d1219369017d904e020901e65c57a4853144db037d28"} Mar 13 10:36:34.197445 master-0 kubenswrapper[7271]: I0313 10:36:34.197374 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:36:34.198620 master-0 kubenswrapper[7271]: I0313 10:36:34.198535 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zc596" event={"ID":"11927952-723f-4d6d-922b-73139abe8877","Type":"ContainerStarted","Data":"8b123d8cf30f9a2e585e105a6d1e6a093488b477d996f0893a6a50a5c5b92b38"} Mar 13 10:36:34.200896 master-0 kubenswrapper[7271]: I0313 10:36:34.200848 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_0d88291e-a7f9-4c77-8b81-50dda9d2ff14/installer/0.log" Mar 13 10:36:34.200952 master-0 kubenswrapper[7271]: I0313 10:36:34.200917 7271 generic.go:334] "Generic (PLEG): container finished" podID="0d88291e-a7f9-4c77-8b81-50dda9d2ff14" containerID="97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9" exitCode=1 Mar 13 10:36:34.201005 master-0 kubenswrapper[7271]: I0313 10:36:34.200990 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw" Mar 13 10:36:34.201116 master-0 kubenswrapper[7271]: I0313 10:36:34.201085 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 10:36:34.201700 master-0 kubenswrapper[7271]: I0313 10:36:34.201635 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"0d88291e-a7f9-4c77-8b81-50dda9d2ff14","Type":"ContainerDied","Data":"97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9"} Mar 13 10:36:34.201765 master-0 kubenswrapper[7271]: I0313 10:36:34.201718 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"0d88291e-a7f9-4c77-8b81-50dda9d2ff14","Type":"ContainerDied","Data":"4b2279f7e9b21dd11be17bb657c74ee88013cb874771e8b049bc705f20e9bf41"} Mar 13 10:36:34.201765 master-0 kubenswrapper[7271]: I0313 10:36:34.201758 7271 scope.go:117] "RemoveContainer" containerID="97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9" Mar 13 10:36:34.203158 master-0 kubenswrapper[7271]: I0313 10:36:34.202980 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k" Mar 13 10:36:34.203158 master-0 kubenswrapper[7271]: I0313 10:36:34.203016 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:36:34.206711 master-0 kubenswrapper[7271]: I0313 10:36:34.206657 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tfwn8" podStartSLOduration=1.206640498 podStartE2EDuration="1.206640498s" podCreationTimestamp="2026-03-13 10:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:34.204224882 +0000 UTC m=+48.731047272" watchObservedRunningTime="2026-03-13 10:36:34.206640498 +0000 UTC m=+48.733462888" Mar 13 10:36:34.221458 master-0 kubenswrapper[7271]: I0313 10:36:34.221349 7271 scope.go:117] "RemoveContainer" containerID="97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9" Mar 13 10:36:34.222042 master-0 kubenswrapper[7271]: E0313 10:36:34.222009 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9\": container with ID starting with 97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9 not found: ID does not exist" containerID="97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9" Mar 13 10:36:34.222210 master-0 kubenswrapper[7271]: I0313 10:36:34.222050 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9"} err="failed to get container status \"97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9\": rpc error: code = NotFound desc = could not find container 
\"97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9\": container with ID starting with 97074579cf71d770c29db8ecd79b00b09095ade878a39b32328d1f465a6553a9 not found: ID does not exist" Mar 13 10:36:34.229769 master-0 kubenswrapper[7271]: I0313 10:36:34.229673 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" podStartSLOduration=10.229651279 podStartE2EDuration="10.229651279s" podCreationTimestamp="2026-03-13 10:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:34.228280182 +0000 UTC m=+48.755102582" watchObservedRunningTime="2026-03-13 10:36:34.229651279 +0000 UTC m=+48.756473669" Mar 13 10:36:34.298128 master-0 kubenswrapper[7271]: I0313 10:36:34.263198 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw"] Mar 13 10:36:34.298128 master-0 kubenswrapper[7271]: I0313 10:36:34.266834 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc5589ff-d48hw"] Mar 13 10:36:34.324790 master-0 kubenswrapper[7271]: I0313 10:36:34.317967 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 10:36:34.326279 master-0 kubenswrapper[7271]: I0313 10:36:34.325307 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 10:36:34.350737 master-0 kubenswrapper[7271]: I0313 10:36:34.350680 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 10:36:34.351192 master-0 kubenswrapper[7271]: E0313 10:36:34.351178 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d88291e-a7f9-4c77-8b81-50dda9d2ff14" containerName="installer" Mar 13 
10:36:34.351264 master-0 kubenswrapper[7271]: I0313 10:36:34.351254 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d88291e-a7f9-4c77-8b81-50dda9d2ff14" containerName="installer" Mar 13 10:36:34.351401 master-0 kubenswrapper[7271]: I0313 10:36:34.351389 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d88291e-a7f9-4c77-8b81-50dda9d2ff14" containerName="installer" Mar 13 10:36:34.351836 master-0 kubenswrapper[7271]: I0313 10:36:34.351816 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.361638 master-0 kubenswrapper[7271]: I0313 10:36:34.361229 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 10:36:34.382109 master-0 kubenswrapper[7271]: I0313 10:36:34.382064 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k"] Mar 13 10:36:34.387286 master-0 kubenswrapper[7271]: I0313 10:36:34.386456 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c5b48d77b-g5f7k"] Mar 13 10:36:34.402299 master-0 kubenswrapper[7271]: I0313 10:36:34.402237 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d60570a-069b-43fe-be3e-814955fec7ce-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:34.402299 master-0 kubenswrapper[7271]: I0313 10:36:34.402282 7271 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d60570a-069b-43fe-be3e-814955fec7ce-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:34.506181 master-0 kubenswrapper[7271]: I0313 10:36:34.505650 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b9847187-75a5-4ae5-9e53-7968725bae6e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.506181 master-0 kubenswrapper[7271]: I0313 10:36:34.505719 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-var-lock\") pod \"installer-3-master-0\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.506181 master-0 kubenswrapper[7271]: I0313 10:36:34.505787 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.506181 master-0 kubenswrapper[7271]: I0313 10:36:34.505926 7271 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7bcec3b-b3d9-432d-96b5-ba61d11ab010-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:34.607611 master-0 kubenswrapper[7271]: I0313 10:36:34.607074 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9847187-75a5-4ae5-9e53-7968725bae6e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.607611 master-0 kubenswrapper[7271]: I0313 10:36:34.607229 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-var-lock\") pod \"installer-3-master-0\" 
(UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.607611 master-0 kubenswrapper[7271]: I0313 10:36:34.607285 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.607611 master-0 kubenswrapper[7271]: I0313 10:36:34.607490 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-var-lock\") pod \"installer-3-master-0\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.609224 master-0 kubenswrapper[7271]: I0313 10:36:34.609073 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.628710 master-0 kubenswrapper[7271]: I0313 10:36:34.628529 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9847187-75a5-4ae5-9e53-7968725bae6e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:34.688059 master-0 kubenswrapper[7271]: I0313 10:36:34.687381 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 10:36:35.652865 master-0 kubenswrapper[7271]: I0313 10:36:35.652798 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d88291e-a7f9-4c77-8b81-50dda9d2ff14" path="/var/lib/kubelet/pods/0d88291e-a7f9-4c77-8b81-50dda9d2ff14/volumes" Mar 13 10:36:35.653826 master-0 kubenswrapper[7271]: I0313 10:36:35.653449 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d60570a-069b-43fe-be3e-814955fec7ce" path="/var/lib/kubelet/pods/8d60570a-069b-43fe-be3e-814955fec7ce/volumes" Mar 13 10:36:35.653971 master-0 kubenswrapper[7271]: I0313 10:36:35.653942 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7bcec3b-b3d9-432d-96b5-ba61d11ab010" path="/var/lib/kubelet/pods/f7bcec3b-b3d9-432d-96b5-ba61d11ab010/volumes" Mar 13 10:36:36.093120 master-0 kubenswrapper[7271]: I0313 10:36:36.093029 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6954c8766d-g8z48"] Mar 13 10:36:36.094284 master-0 kubenswrapper[7271]: I0313 10:36:36.094243 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.094718 master-0 kubenswrapper[7271]: I0313 10:36:36.094693 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"] Mar 13 10:36:36.095299 master-0 kubenswrapper[7271]: I0313 10:36:36.095265 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.096915 master-0 kubenswrapper[7271]: I0313 10:36:36.096850 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:36:36.096915 master-0 kubenswrapper[7271]: I0313 10:36:36.096908 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:36:36.097088 master-0 kubenswrapper[7271]: I0313 10:36:36.096908 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:36:36.097221 master-0 kubenswrapper[7271]: I0313 10:36:36.097205 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 10:36:36.097319 master-0 kubenswrapper[7271]: I0313 10:36:36.097290 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 10:36:36.101230 master-0 kubenswrapper[7271]: I0313 10:36:36.101201 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 10:36:36.101230 master-0 kubenswrapper[7271]: I0313 10:36:36.101211 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 10:36:36.101360 master-0 kubenswrapper[7271]: I0313 10:36:36.101290 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 10:36:36.103294 master-0 kubenswrapper[7271]: I0313 10:36:36.103273 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:36:36.106012 master-0 kubenswrapper[7271]: I0313 10:36:36.105968 7271 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:36:36.112922 master-0 kubenswrapper[7271]: I0313 10:36:36.112837 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:36:36.119260 master-0 kubenswrapper[7271]: I0313 10:36:36.119200 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"] Mar 13 10:36:36.121178 master-0 kubenswrapper[7271]: I0313 10:36:36.121105 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6954c8766d-g8z48"] Mar 13 10:36:36.239720 master-0 kubenswrapper[7271]: I0313 10:36:36.239662 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6317b62a-46e2-4a45-9c29-cb04c40d4425-serving-cert\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.239720 master-0 kubenswrapper[7271]: I0313 10:36:36.239712 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d239be49-f88d-46e3-a101-3a46119597ce-serving-cert\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.240377 master-0 kubenswrapper[7271]: I0313 10:36:36.239885 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pmgf\" (UniqueName: \"kubernetes.io/projected/d239be49-f88d-46e3-a101-3a46119597ce-kube-api-access-7pmgf\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " 
pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.240377 master-0 kubenswrapper[7271]: I0313 10:36:36.239987 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-proxy-ca-bundles\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.240377 master-0 kubenswrapper[7271]: I0313 10:36:36.240016 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-client-ca\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.240377 master-0 kubenswrapper[7271]: I0313 10:36:36.240063 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-config\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.240377 master-0 kubenswrapper[7271]: I0313 10:36:36.240079 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-config\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.240377 master-0 kubenswrapper[7271]: I0313 10:36:36.240103 7271 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-client-ca\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.240377 master-0 kubenswrapper[7271]: I0313 10:36:36.240119 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8dd\" (UniqueName: \"kubernetes.io/projected/6317b62a-46e2-4a45-9c29-cb04c40d4425-kube-api-access-2d8dd\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.341498 master-0 kubenswrapper[7271]: I0313 10:36:36.341377 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-client-ca\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.341832 master-0 kubenswrapper[7271]: I0313 10:36:36.341681 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-config\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.341832 master-0 kubenswrapper[7271]: I0313 10:36:36.341757 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-config\") pod \"controller-manager-6954c8766d-g8z48\" (UID: 
\"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.341832 master-0 kubenswrapper[7271]: I0313 10:36:36.341819 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-client-ca\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.342139 master-0 kubenswrapper[7271]: I0313 10:36:36.342109 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8dd\" (UniqueName: \"kubernetes.io/projected/6317b62a-46e2-4a45-9c29-cb04c40d4425-kube-api-access-2d8dd\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.342214 master-0 kubenswrapper[7271]: I0313 10:36:36.342158 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6317b62a-46e2-4a45-9c29-cb04c40d4425-serving-cert\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.342214 master-0 kubenswrapper[7271]: I0313 10:36:36.342188 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d239be49-f88d-46e3-a101-3a46119597ce-serving-cert\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.342623 master-0 kubenswrapper[7271]: I0313 10:36:36.342233 7271 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-7pmgf\" (UniqueName: \"kubernetes.io/projected/d239be49-f88d-46e3-a101-3a46119597ce-kube-api-access-7pmgf\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.342623 master-0 kubenswrapper[7271]: I0313 10:36:36.342262 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-proxy-ca-bundles\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.342623 master-0 kubenswrapper[7271]: I0313 10:36:36.342280 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-client-ca\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.342822 master-0 kubenswrapper[7271]: I0313 10:36:36.342746 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-config\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.343631 master-0 kubenswrapper[7271]: I0313 10:36:36.343512 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-config\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " 
pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.343723 master-0 kubenswrapper[7271]: I0313 10:36:36.343688 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-proxy-ca-bundles\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.344666 master-0 kubenswrapper[7271]: I0313 10:36:36.344627 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-client-ca\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.346103 master-0 kubenswrapper[7271]: I0313 10:36:36.346062 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6317b62a-46e2-4a45-9c29-cb04c40d4425-serving-cert\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.346654 master-0 kubenswrapper[7271]: I0313 10:36:36.346601 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d239be49-f88d-46e3-a101-3a46119597ce-serving-cert\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.367986 master-0 kubenswrapper[7271]: I0313 10:36:36.367934 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8dd\" (UniqueName: 
\"kubernetes.io/projected/6317b62a-46e2-4a45-9c29-cb04c40d4425-kube-api-access-2d8dd\") pod \"controller-manager-6954c8766d-g8z48\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") " pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.370642 master-0 kubenswrapper[7271]: I0313 10:36:36.370526 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pmgf\" (UniqueName: \"kubernetes.io/projected/d239be49-f88d-46e3-a101-3a46119597ce-kube-api-access-7pmgf\") pod \"route-controller-manager-657b8bf46d-r5dxm\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") " pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:36.435678 master-0 kubenswrapper[7271]: I0313 10:36:36.435611 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:36:36.457479 master-0 kubenswrapper[7271]: I0313 10:36:36.457370 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" Mar 13 10:36:37.492343 master-0 kubenswrapper[7271]: I0313 10:36:37.492276 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"] Mar 13 10:36:37.513187 master-0 kubenswrapper[7271]: W0313 10:36:37.511212 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd239be49_f88d_46e3_a101_3a46119597ce.slice/crio-4696b518053adef4bb11b654559eaa82546e4638ff3b69c3346ba410132ca32c WatchSource:0}: Error finding container 4696b518053adef4bb11b654559eaa82546e4638ff3b69c3346ba410132ca32c: Status 404 returned error can't find the container with id 4696b518053adef4bb11b654559eaa82546e4638ff3b69c3346ba410132ca32c Mar 13 10:36:37.749779 master-0 kubenswrapper[7271]: I0313 10:36:37.749694 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 10:36:37.808249 master-0 kubenswrapper[7271]: I0313 10:36:37.808182 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6954c8766d-g8z48"] Mar 13 10:36:38.227160 master-0 kubenswrapper[7271]: I0313 10:36:38.227074 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" event={"ID":"6317b62a-46e2-4a45-9c29-cb04c40d4425","Type":"ContainerStarted","Data":"86cb046a6c3fac4fbe29befba2b5b8736fb3773273af51b8d6b5596b1388eb8c"} Mar 13 10:36:38.228989 master-0 kubenswrapper[7271]: I0313 10:36:38.228929 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zc596" event={"ID":"11927952-723f-4d6d-922b-73139abe8877","Type":"ContainerStarted","Data":"3c2a76aa8a2a0f72e04e58d28a6c17810f002ff7ee246e2b7c870e1ddabbbbd7"} Mar 13 10:36:38.229056 master-0 kubenswrapper[7271]: I0313 
10:36:38.229024 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zc596" event={"ID":"11927952-723f-4d6d-922b-73139abe8877","Type":"ContainerStarted","Data":"568fb3ed1880b05cf16c7dc7a849b5b839c9cd68fd66552220140def6e95a172"} Mar 13 10:36:38.229123 master-0 kubenswrapper[7271]: I0313 10:36:38.229084 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zc596" Mar 13 10:36:38.231760 master-0 kubenswrapper[7271]: I0313 10:36:38.231704 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"b9847187-75a5-4ae5-9e53-7968725bae6e","Type":"ContainerStarted","Data":"e46a0e87a063e0f3b6564d9616af4ee6a3b34fd616a438be1e7eedb82547f0fd"} Mar 13 10:36:38.231760 master-0 kubenswrapper[7271]: I0313 10:36:38.231755 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"b9847187-75a5-4ae5-9e53-7968725bae6e","Type":"ContainerStarted","Data":"7beb708d725500bd51bcb3edd18db25c31d800cb8f2a0303ae85fcd6d4d806af"} Mar 13 10:36:38.233689 master-0 kubenswrapper[7271]: I0313 10:36:38.233647 7271 generic.go:334] "Generic (PLEG): container finished" podID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerID="0647723824e586709d350ad5bb33b6a1dfb3aeaa2aa48bea8b456cd7a39c8a13" exitCode=0 Mar 13 10:36:38.233752 master-0 kubenswrapper[7271]: I0313 10:36:38.233712 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" event={"ID":"1d72d950-cfb4-4ed5-9ad6-f7266b937493","Type":"ContainerDied","Data":"0647723824e586709d350ad5bb33b6a1dfb3aeaa2aa48bea8b456cd7a39c8a13"} Mar 13 10:36:38.236341 master-0 kubenswrapper[7271]: I0313 10:36:38.236295 7271 generic.go:334] "Generic (PLEG): container finished" podID="4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b" containerID="8e7c2978cc4dfb448748849f09d2780b89faa57635195de3a271f009a5331f69" exitCode=0 Mar 13 
10:36:38.236459 master-0 kubenswrapper[7271]: I0313 10:36:38.236393 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" event={"ID":"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b","Type":"ContainerDied","Data":"8e7c2978cc4dfb448748849f09d2780b89faa57635195de3a271f009a5331f69"} Mar 13 10:36:38.238450 master-0 kubenswrapper[7271]: I0313 10:36:38.238376 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" event={"ID":"d239be49-f88d-46e3-a101-3a46119597ce","Type":"ContainerStarted","Data":"4696b518053adef4bb11b654559eaa82546e4638ff3b69c3346ba410132ca32c"} Mar 13 10:36:38.249739 master-0 kubenswrapper[7271]: I0313 10:36:38.249099 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zc596" podStartSLOduration=2.619961324 podStartE2EDuration="6.249072113s" podCreationTimestamp="2026-03-13 10:36:32 +0000 UTC" firstStartedPulling="2026-03-13 10:36:33.723160467 +0000 UTC m=+48.249982857" lastFinishedPulling="2026-03-13 10:36:37.352271256 +0000 UTC m=+51.879093646" observedRunningTime="2026-03-13 10:36:38.248626031 +0000 UTC m=+52.775448421" watchObservedRunningTime="2026-03-13 10:36:38.249072113 +0000 UTC m=+52.775894503" Mar 13 10:36:38.270493 master-0 kubenswrapper[7271]: I0313 10:36:38.270414 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=4.270384438 podStartE2EDuration="4.270384438s" podCreationTimestamp="2026-03-13 10:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:38.270374758 +0000 UTC m=+52.797197148" watchObservedRunningTime="2026-03-13 10:36:38.270384438 +0000 UTC m=+52.797206828" Mar 13 10:36:39.260265 master-0 kubenswrapper[7271]: I0313 10:36:39.260207 7271 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" event={"ID":"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b","Type":"ContainerStarted","Data":"cc4631dd1b725b75e1923893d61a36d6a24f70dbacf289dfef1c4b665f7eda76"} Mar 13 10:36:39.283767 master-0 kubenswrapper[7271]: I0313 10:36:39.281695 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" event={"ID":"1d72d950-cfb4-4ed5-9ad6-f7266b937493","Type":"ContainerStarted","Data":"a477cdecbd005d18e3705183106e7e06fb65e4e4319898b5877c4074bc027812"} Mar 13 10:36:39.283767 master-0 kubenswrapper[7271]: I0313 10:36:39.281806 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" event={"ID":"1d72d950-cfb4-4ed5-9ad6-f7266b937493","Type":"ContainerStarted","Data":"cee24429dde24f5526e6f382f585202e09ad0934af66aea44393ab4d5b3b0c7f"} Mar 13 10:36:39.343198 master-0 kubenswrapper[7271]: I0313 10:36:39.343081 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" podStartSLOduration=12.10109979 podStartE2EDuration="17.343059143s" podCreationTimestamp="2026-03-13 10:36:22 +0000 UTC" firstStartedPulling="2026-03-13 10:36:32.07613494 +0000 UTC m=+46.602957330" lastFinishedPulling="2026-03-13 10:36:37.318094293 +0000 UTC m=+51.844916683" observedRunningTime="2026-03-13 10:36:39.335331144 +0000 UTC m=+53.862153554" watchObservedRunningTime="2026-03-13 10:36:39.343059143 +0000 UTC m=+53.869881533" Mar 13 10:36:39.459056 master-0 kubenswrapper[7271]: I0313 10:36:39.458969 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:39.459056 master-0 kubenswrapper[7271]: I0313 10:36:39.459028 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:36:39.672739 master-0 
kubenswrapper[7271]: I0313 10:36:39.672579 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podStartSLOduration=7.279391732 podStartE2EDuration="12.672469034s" podCreationTimestamp="2026-03-13 10:36:27 +0000 UTC" firstStartedPulling="2026-03-13 10:36:31.933560722 +0000 UTC m=+46.460383112" lastFinishedPulling="2026-03-13 10:36:37.326638024 +0000 UTC m=+51.853460414" observedRunningTime="2026-03-13 10:36:39.671337683 +0000 UTC m=+54.198160073" watchObservedRunningTime="2026-03-13 10:36:39.672469034 +0000 UTC m=+54.199291444" Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: I0313 10:36:41.107228 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]etcd ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:36:41.107308 master-0 
kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:36:41.107308 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:36:41.108335 master-0 kubenswrapper[7271]: I0313 10:36:41.107325 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:36:41.348985 master-0 kubenswrapper[7271]: I0313 10:36:41.348897 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 10:36:41.349295 master-0 kubenswrapper[7271]: I0313 10:36:41.349240 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="b9847187-75a5-4ae5-9e53-7968725bae6e" containerName="installer" containerID="cri-o://e46a0e87a063e0f3b6564d9616af4ee6a3b34fd616a438be1e7eedb82547f0fd" gracePeriod=30 Mar 13 10:36:41.395468 master-0 kubenswrapper[7271]: I0313 10:36:41.395285 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:41.396051 master-0 kubenswrapper[7271]: I0313 10:36:41.395982 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:41.423844 master-0 kubenswrapper[7271]: I0313 10:36:41.423810 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 
10:36:42.054094 master-0 kubenswrapper[7271]: I0313 10:36:42.054003 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:36:42.723625 master-0 kubenswrapper[7271]: I0313 10:36:42.719723 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:36:43.920978 master-0 kubenswrapper[7271]: I0313 10:36:43.920894 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 10:36:43.921830 master-0 kubenswrapper[7271]: I0313 10:36:43.921786 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.004666 master-0 kubenswrapper[7271]: I0313 10:36:44.004551 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-var-lock\") pod \"installer-4-master-0\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.004948 master-0 kubenswrapper[7271]: I0313 10:36:44.004722 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.004948 master-0 kubenswrapper[7271]: I0313 10:36:44.004807 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " 
pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.108617 master-0 kubenswrapper[7271]: I0313 10:36:44.107494 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.108617 master-0 kubenswrapper[7271]: I0313 10:36:44.107577 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-var-lock\") pod \"installer-4-master-0\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.108617 master-0 kubenswrapper[7271]: I0313 10:36:44.107720 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.108617 master-0 kubenswrapper[7271]: I0313 10:36:44.108476 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.108617 master-0 kubenswrapper[7271]: I0313 10:36:44.108530 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-var-lock\") pod \"installer-4-master-0\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " 
pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.255695 master-0 kubenswrapper[7271]: I0313 10:36:44.250265 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 10:36:44.294935 master-0 kubenswrapper[7271]: I0313 10:36:44.294498 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: I0313 10:36:44.463204 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]etcd ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:36:44.471118 master-0 
kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:36:44.471118 master-0 kubenswrapper[7271]: I0313 10:36:44.463296 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:36:44.544804 master-0 kubenswrapper[7271]: I0313 10:36:44.544071 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:36:44.724465 master-0 kubenswrapper[7271]: I0313 10:36:44.722116 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_b9847187-75a5-4ae5-9e53-7968725bae6e/installer/0.log" Mar 13 10:36:44.724465 master-0 kubenswrapper[7271]: I0313 10:36:44.722154 7271 generic.go:334] "Generic (PLEG): container finished" podID="b9847187-75a5-4ae5-9e53-7968725bae6e" containerID="e46a0e87a063e0f3b6564d9616af4ee6a3b34fd616a438be1e7eedb82547f0fd" exitCode=1 Mar 13 10:36:44.724465 master-0 kubenswrapper[7271]: I0313 10:36:44.722203 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"b9847187-75a5-4ae5-9e53-7968725bae6e","Type":"ContainerDied","Data":"e46a0e87a063e0f3b6564d9616af4ee6a3b34fd616a438be1e7eedb82547f0fd"} Mar 13 10:36:44.739241 master-0 kubenswrapper[7271]: I0313 10:36:44.739059 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf/installer/0.log"
Mar 13 10:36:44.739241 master-0 kubenswrapper[7271]: I0313 10:36:44.739121 7271 generic.go:334] "Generic (PLEG): container finished" podID="ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf" containerID="d84eecae63542ba948d53d567f42cad7dd26e9b2bfc0e6b741cc53afc3e9e71f" exitCode=1
Mar 13 10:36:44.739241 master-0 kubenswrapper[7271]: I0313 10:36:44.739221 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf","Type":"ContainerDied","Data":"d84eecae63542ba948d53d567f42cad7dd26e9b2bfc0e6b741cc53afc3e9e71f"}
Mar 13 10:36:44.742628 master-0 kubenswrapper[7271]: I0313 10:36:44.741801 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" event={"ID":"d239be49-f88d-46e3-a101-3a46119597ce","Type":"ContainerStarted","Data":"9a7412046a658318247dec7713ea99b14482d2ecbdfa4d40aa9244ac9b9a17de"}
Mar 13 10:36:44.760786 master-0 kubenswrapper[7271]: I0313 10:36:44.760391 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf/installer/0.log"
Mar 13 10:36:44.760786 master-0 kubenswrapper[7271]: I0313 10:36:44.760483 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:44.785661 master-0 kubenswrapper[7271]: I0313 10:36:44.784895 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_b9847187-75a5-4ae5-9e53-7968725bae6e/installer/0.log"
Mar 13 10:36:44.785661 master-0 kubenswrapper[7271]: I0313 10:36:44.784990 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.816541 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9847187-75a5-4ae5-9e53-7968725bae6e-kube-api-access\") pod \"b9847187-75a5-4ae5-9e53-7968725bae6e\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") "
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.816670 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-var-lock\") pod \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") "
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.816703 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-var-lock\") pod \"b9847187-75a5-4ae5-9e53-7968725bae6e\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") "
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.816764 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-kubelet-dir\") pod \"b9847187-75a5-4ae5-9e53-7968725bae6e\" (UID: \"b9847187-75a5-4ae5-9e53-7968725bae6e\") "
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.816794 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kube-api-access\") pod \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") "
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.816830 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kubelet-dir\") pod \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\" (UID: \"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf\") "
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.817452 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf" (UID: "ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.817506 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-var-lock" (OuterVolumeSpecName: "var-lock") pod "b9847187-75a5-4ae5-9e53-7968725bae6e" (UID: "b9847187-75a5-4ae5-9e53-7968725bae6e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.817525 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-var-lock" (OuterVolumeSpecName: "var-lock") pod "ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf" (UID: "ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:36:44.817921 master-0 kubenswrapper[7271]: I0313 10:36:44.817541 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b9847187-75a5-4ae5-9e53-7968725bae6e" (UID: "b9847187-75a5-4ae5-9e53-7968725bae6e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:36:44.822362 master-0 kubenswrapper[7271]: I0313 10:36:44.822222 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf" (UID: "ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:36:44.822362 master-0 kubenswrapper[7271]: I0313 10:36:44.822303 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9847187-75a5-4ae5-9e53-7968725bae6e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b9847187-75a5-4ae5-9e53-7968725bae6e" (UID: "b9847187-75a5-4ae5-9e53-7968725bae6e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:36:44.822631 master-0 kubenswrapper[7271]: I0313 10:36:44.822552 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 10:36:44.822631 master-0 kubenswrapper[7271]: I0313 10:36:44.822594 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 10:36:44.822631 master-0 kubenswrapper[7271]: I0313 10:36:44.822606 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9847187-75a5-4ae5-9e53-7968725bae6e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 10:36:44.822631 master-0 kubenswrapper[7271]: I0313 10:36:44.822618 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 10:36:44.822631 master-0 kubenswrapper[7271]: I0313 10:36:44.822629 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 10:36:44.822801 master-0 kubenswrapper[7271]: I0313 10:36:44.822639 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9847187-75a5-4ae5-9e53-7968725bae6e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 10:36:44.992296 master-0 kubenswrapper[7271]: I0313 10:36:44.992185 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" podStartSLOduration=9.360906749 podStartE2EDuration="12.992161085s" podCreationTimestamp="2026-03-13 10:36:32 +0000 UTC" firstStartedPulling="2026-03-13 10:36:37.515273966 +0000 UTC m=+52.042096356" lastFinishedPulling="2026-03-13 10:36:41.146528302 +0000 UTC m=+55.673350692" observedRunningTime="2026-03-13 10:36:44.849103134 +0000 UTC m=+59.375925524" watchObservedRunningTime="2026-03-13 10:36:44.992161085 +0000 UTC m=+59.518983475"
Mar 13 10:36:45.122264 master-0 kubenswrapper[7271]: I0313 10:36:45.121643 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 13 10:36:45.131386 master-0 kubenswrapper[7271]: W0313 10:36:45.131331 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9e06733a_9c47_4bcf_a5e2_946db8e2714b.slice/crio-8624adf36154fe1f7cdb5c9eb99ed2b301e80e18fd1f6d8154a250c1a73d647b WatchSource:0}: Error finding container 8624adf36154fe1f7cdb5c9eb99ed2b301e80e18fd1f6d8154a250c1a73d647b: Status 404 returned error can't find the container with id 8624adf36154fe1f7cdb5c9eb99ed2b301e80e18fd1f6d8154a250c1a73d647b
Mar 13 10:36:45.574610 master-0 kubenswrapper[7271]: I0313 10:36:45.573636 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 13 10:36:45.574610 master-0 kubenswrapper[7271]: E0313 10:36:45.573882 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9847187-75a5-4ae5-9e53-7968725bae6e" containerName="installer"
Mar 13 10:36:45.574610 master-0 kubenswrapper[7271]: I0313 10:36:45.573896 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9847187-75a5-4ae5-9e53-7968725bae6e" containerName="installer"
Mar 13 10:36:45.574610 master-0 kubenswrapper[7271]: E0313 10:36:45.573908 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf" containerName="installer"
Mar 13 10:36:45.574610 master-0 kubenswrapper[7271]: I0313 10:36:45.573916 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf" containerName="installer"
Mar 13 10:36:45.574610 master-0 kubenswrapper[7271]: I0313 10:36:45.574004 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9847187-75a5-4ae5-9e53-7968725bae6e" containerName="installer"
Mar 13 10:36:45.574610 master-0 kubenswrapper[7271]: I0313 10:36:45.574024 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf" containerName="installer"
Mar 13 10:36:45.574610 master-0 kubenswrapper[7271]: I0313 10:36:45.574469 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.585324 master-0 kubenswrapper[7271]: I0313 10:36:45.580146 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 13 10:36:45.631638 master-0 kubenswrapper[7271]: I0313 10:36:45.631178 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 13 10:36:45.633849 master-0 kubenswrapper[7271]: I0313 10:36:45.633811 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7baf3efc-04dc-4c17-9c2a-397ac022d281-kube-api-access\") pod \"installer-1-master-0\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.634214 master-0 kubenswrapper[7271]: I0313 10:36:45.634193 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.634341 master-0 kubenswrapper[7271]: I0313 10:36:45.634324 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-var-lock\") pod \"installer-1-master-0\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.735740 master-0 kubenswrapper[7271]: I0313 10:36:45.735539 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7baf3efc-04dc-4c17-9c2a-397ac022d281-kube-api-access\") pod \"installer-1-master-0\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.735986 master-0 kubenswrapper[7271]: I0313 10:36:45.735778 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.735986 master-0 kubenswrapper[7271]: I0313 10:36:45.735823 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-var-lock\") pod \"installer-1-master-0\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.735986 master-0 kubenswrapper[7271]: I0313 10:36:45.735850 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.736445 master-0 kubenswrapper[7271]: I0313 10:36:45.736052 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-var-lock\") pod \"installer-1-master-0\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.766050 master-0 kubenswrapper[7271]: I0313 10:36:45.765656 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" event={"ID":"6317b62a-46e2-4a45-9c29-cb04c40d4425","Type":"ContainerStarted","Data":"e071f5df1cf13730e7c3a2d7e673c1b7527862b8e1f69ed525efba676776f319"}
Mar 13 10:36:45.766568 master-0 kubenswrapper[7271]: I0313 10:36:45.766533 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48"
Mar 13 10:36:45.769306 master-0 kubenswrapper[7271]: I0313 10:36:45.769255 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 13 10:36:45.770445 master-0 kubenswrapper[7271]: I0313 10:36:45.769975 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"b9847187-75a5-4ae5-9e53-7968725bae6e","Type":"ContainerDied","Data":"7beb708d725500bd51bcb3edd18db25c31d800cb8f2a0303ae85fcd6d4d806af"}
Mar 13 10:36:45.770445 master-0 kubenswrapper[7271]: I0313 10:36:45.770044 7271 scope.go:117] "RemoveContainer" containerID="e46a0e87a063e0f3b6564d9616af4ee6a3b34fd616a438be1e7eedb82547f0fd"
Mar 13 10:36:45.770445 master-0 kubenswrapper[7271]: I0313 10:36:45.770203 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 10:36:45.775469 master-0 kubenswrapper[7271]: I0313 10:36:45.775326 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48"
Mar 13 10:36:45.777891 master-0 kubenswrapper[7271]: I0313 10:36:45.777843 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"9e06733a-9c47-4bcf-a5e2-946db8e2714b","Type":"ContainerStarted","Data":"c87d032f992ab15941d07ccbd459ecd39c5fd54e6df8b197a56c0bc747f7d534"}
Mar 13 10:36:45.777891 master-0 kubenswrapper[7271]: I0313 10:36:45.777889 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"9e06733a-9c47-4bcf-a5e2-946db8e2714b","Type":"ContainerStarted","Data":"8624adf36154fe1f7cdb5c9eb99ed2b301e80e18fd1f6d8154a250c1a73d647b"}
Mar 13 10:36:45.781274 master-0 kubenswrapper[7271]: I0313 10:36:45.781221 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf","Type":"ContainerDied","Data":"6846571eb7604f88adba4d52809ed29920ec8a6a32a2b601655a0f4b9e49c442"}
Mar 13 10:36:45.784023 master-0 kubenswrapper[7271]: I0313 10:36:45.783993 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 13 10:36:45.784283 master-0 kubenswrapper[7271]: I0313 10:36:45.784258 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"
Mar 13 10:36:45.784890 master-0 kubenswrapper[7271]: I0313 10:36:45.784863 7271 scope.go:117] "RemoveContainer" containerID="d84eecae63542ba948d53d567f42cad7dd26e9b2bfc0e6b741cc53afc3e9e71f"
Mar 13 10:36:45.787662 master-0 kubenswrapper[7271]: I0313 10:36:45.787626 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7baf3efc-04dc-4c17-9c2a-397ac022d281-kube-api-access\") pod \"installer-1-master-0\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.789847 master-0 kubenswrapper[7271]: I0313 10:36:45.789753 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"
Mar 13 10:36:45.911808 master-0 kubenswrapper[7271]: I0313 10:36:45.911260 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 13 10:36:45.912823 master-0 kubenswrapper[7271]: I0313 10:36:45.912784 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:45.924704 master-0 kubenswrapper[7271]: I0313 10:36:45.924627 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 13 10:36:45.926753 master-0 kubenswrapper[7271]: I0313 10:36:45.926701 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 10:36:45.938078 master-0 kubenswrapper[7271]: I0313 10:36:45.938032 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/feb7b798-15b5-4004-87d0-96ce9381cdbe-kube-api-access\") pod \"installer-1-master-0\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:45.938154 master-0 kubenswrapper[7271]: I0313 10:36:45.938089 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-var-lock\") pod \"installer-1-master-0\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:45.938154 master-0 kubenswrapper[7271]: I0313 10:36:45.938131 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:45.945075 master-0 kubenswrapper[7271]: I0313 10:36:45.945005 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 13 10:36:46.040122 master-0 kubenswrapper[7271]: I0313 10:36:46.039980 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/feb7b798-15b5-4004-87d0-96ce9381cdbe-kube-api-access\") pod \"installer-1-master-0\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:46.040122 master-0 kubenswrapper[7271]: I0313 10:36:46.040041 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-var-lock\") pod \"installer-1-master-0\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:46.040122 master-0 kubenswrapper[7271]: I0313 10:36:46.040076 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:46.040889 master-0 kubenswrapper[7271]: I0313 10:36:46.040184 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:46.040889 master-0 kubenswrapper[7271]: I0313 10:36:46.040460 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-var-lock\") pod \"installer-1-master-0\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:46.129955 master-0 kubenswrapper[7271]: I0313 10:36:46.129881 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/feb7b798-15b5-4004-87d0-96ce9381cdbe-kube-api-access\") pod \"installer-1-master-0\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:46.224414 master-0 kubenswrapper[7271]: I0313 10:36:46.224355 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 10:36:46.243751 master-0 kubenswrapper[7271]: I0313 10:36:46.237999 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 10:36:46.267574 master-0 kubenswrapper[7271]: I0313 10:36:46.267485 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 13 10:36:46.611168 master-0 kubenswrapper[7271]: I0313 10:36:46.610222 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 13 10:36:46.631628 master-0 kubenswrapper[7271]: W0313 10:36:46.628455 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7baf3efc_04dc_4c17_9c2a_397ac022d281.slice/crio-13b9f88ad828dc6f0b9caaa000ec4304ee1cea2959cd111893dbdd54815ac13d WatchSource:0}: Error finding container 13b9f88ad828dc6f0b9caaa000ec4304ee1cea2959cd111893dbdd54815ac13d: Status 404 returned error can't find the container with id 13b9f88ad828dc6f0b9caaa000ec4304ee1cea2959cd111893dbdd54815ac13d
Mar 13 10:36:46.787523 master-0 kubenswrapper[7271]: I0313 10:36:46.787477 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"7baf3efc-04dc-4c17-9c2a-397ac022d281","Type":"ContainerStarted","Data":"13b9f88ad828dc6f0b9caaa000ec4304ee1cea2959cd111893dbdd54815ac13d"}
Mar 13 10:36:46.856559 master-0 kubenswrapper[7271]: I0313 10:36:46.856508 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:36:47.531846 master-0 kubenswrapper[7271]: I0313 10:36:47.531726 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podStartSLOduration=8.706656879 podStartE2EDuration="15.531695263s" podCreationTimestamp="2026-03-13 10:36:32 +0000 UTC" firstStartedPulling="2026-03-13 10:36:37.820753612 +0000 UTC m=+52.347576002" lastFinishedPulling="2026-03-13 10:36:44.645791996 +0000 UTC m=+59.172614386" observedRunningTime="2026-03-13 10:36:47.525343542 +0000 UTC m=+62.052165942" watchObservedRunningTime="2026-03-13 10:36:47.531695263 +0000 UTC m=+62.058517663"
Mar 13 10:36:47.532712 master-0 kubenswrapper[7271]: I0313 10:36:47.532136 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 13 10:36:47.552424 master-0 kubenswrapper[7271]: W0313 10:36:47.549431 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfeb7b798_15b5_4004_87d0_96ce9381cdbe.slice/crio-54845f97730049024e50483462ec2fdbbd2a3bf95c64c4c162c260a6e6834b4f WatchSource:0}: Error finding container 54845f97730049024e50483462ec2fdbbd2a3bf95c64c4c162c260a6e6834b4f: Status 404 returned error can't find the container with id 54845f97730049024e50483462ec2fdbbd2a3bf95c64c4c162c260a6e6834b4f
Mar 13 10:36:47.651056 master-0 kubenswrapper[7271]: I0313 10:36:47.650956 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9847187-75a5-4ae5-9e53-7968725bae6e" path="/var/lib/kubelet/pods/b9847187-75a5-4ae5-9e53-7968725bae6e/volumes"
Mar 13 10:36:47.797935 master-0 kubenswrapper[7271]: I0313 10:36:47.797880 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"feb7b798-15b5-4004-87d0-96ce9381cdbe","Type":"ContainerStarted","Data":"54845f97730049024e50483462ec2fdbbd2a3bf95c64c4c162c260a6e6834b4f"}
Mar 13 10:36:47.799474 master-0 kubenswrapper[7271]: I0313 10:36:47.799449 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"7baf3efc-04dc-4c17-9c2a-397ac022d281","Type":"ContainerStarted","Data":"56c9b868392613f72b3a821d9f4fd3508fb4759378ef047d1a2286ea13733ed0"}
Mar 13 10:36:47.843646 master-0 kubenswrapper[7271]: I0313 10:36:47.841887 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"]
Mar 13 10:36:47.843646 master-0 kubenswrapper[7271]: I0313 10:36:47.842155 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" podUID="4aaf36b4-e556-4723-a624-aa2edc69c301" containerName="cluster-version-operator" containerID="cri-o://1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d" gracePeriod=130
Mar 13 10:36:47.877228 master-0 kubenswrapper[7271]: I0313 10:36:47.877138 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=4.877104237 podStartE2EDuration="4.877104237s" podCreationTimestamp="2026-03-13 10:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:47.87685256 +0000 UTC m=+62.403674950" watchObservedRunningTime="2026-03-13 10:36:47.877104237 +0000 UTC m=+62.403926637"
Mar 13 10:36:47.939643 master-0 kubenswrapper[7271]: I0313 10:36:47.937312 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 13 10:36:47.950912 master-0 kubenswrapper[7271]: I0313 10:36:47.950836 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 13 10:36:47.999748 master-0 kubenswrapper[7271]: I0313 10:36:47.999557 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=2.999532712 podStartE2EDuration="2.999532712s" podCreationTimestamp="2026-03-13 10:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:36:47.995444801 +0000 UTC m=+62.522267191" watchObservedRunningTime="2026-03-13 10:36:47.999532712 +0000 UTC m=+62.526355102"
Mar 13 10:36:48.067489 master-0 kubenswrapper[7271]: I0313 10:36:48.066533 7271 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 13 10:36:48.067489 master-0 kubenswrapper[7271]: I0313 10:36:48.066889 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b" gracePeriod=30
Mar 13 10:36:48.067489 master-0 kubenswrapper[7271]: I0313 10:36:48.066954 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe" gracePeriod=30
Mar 13 10:36:48.072350 master-0 kubenswrapper[7271]: I0313 10:36:48.072313 7271 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 13 10:36:48.072630 master-0 kubenswrapper[7271]: E0313 10:36:48.072574 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 13 10:36:48.072702 master-0 kubenswrapper[7271]: I0313 10:36:48.072629 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 13 10:36:48.072702 master-0 kubenswrapper[7271]: E0313 10:36:48.072649 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 13 10:36:48.072702 master-0 kubenswrapper[7271]: I0313 10:36:48.072656 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 13 10:36:48.072923 master-0 kubenswrapper[7271]: I0313 10:36:48.072752 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 13 10:36:48.072923 master-0 kubenswrapper[7271]: I0313 10:36:48.072767 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 13 10:36:48.078786 master-0 kubenswrapper[7271]: I0313 10:36:48.078738 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.180048 master-0 kubenswrapper[7271]: I0313 10:36:48.179673 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.180048 master-0 kubenswrapper[7271]: I0313 10:36:48.179782 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.180048 master-0 kubenswrapper[7271]: I0313 10:36:48.179804 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.180048 master-0 kubenswrapper[7271]: I0313 10:36:48.179830 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.180048 master-0 kubenswrapper[7271]: I0313 10:36:48.179860 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.180048 master-0 kubenswrapper[7271]: I0313 10:36:48.179895 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.281297 master-0 kubenswrapper[7271]: I0313 10:36:48.280951 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.281297 master-0 kubenswrapper[7271]: I0313 10:36:48.281109 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.281297 master-0 kubenswrapper[7271]: I0313 10:36:48.281145 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.281297 master-0 kubenswrapper[7271]: I0313 10:36:48.281218 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.281297 master-0 kubenswrapper[7271]: I0313 10:36:48.281242 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.281297 master-0 kubenswrapper[7271]: I0313 10:36:48.281214 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.282060 master-0 kubenswrapper[7271]: I0313 10:36:48.281895 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.282060 master-0 kubenswrapper[7271]: I0313 10:36:48.281967 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.282253 master-0 kubenswrapper[7271]: I0313 10:36:48.282136 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.282253 master-0 kubenswrapper[7271]: I0313 10:36:48.282187 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.282253 master-0 kubenswrapper[7271]: I0313 10:36:48.282216 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.282253 master-0 kubenswrapper[7271]: I0313 10:36:48.282222 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 10:36:48.486119 master-0 kubenswrapper[7271]: I0313 10:36:48.486085 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zc596"
Mar 13 10:36:48.508244 master-0 kubenswrapper[7271]: I0313 10:36:48.508191 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"
Mar 13 10:36:48.585980 master-0 kubenswrapper[7271]: I0313 10:36:48.585893 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-cvo-updatepayloads\") pod \"4aaf36b4-e556-4723-a624-aa2edc69c301\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") "
Mar 13 10:36:48.585980 master-0 kubenswrapper[7271]: I0313 10:36:48.585985 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") pod \"4aaf36b4-e556-4723-a624-aa2edc69c301\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") "
Mar 13 10:36:48.586745 master-0 kubenswrapper[7271]: I0313 10:36:48.586022 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-ssl-certs\") pod \"4aaf36b4-e556-4723-a624-aa2edc69c301\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") "
Mar 13 10:36:48.586745 master-0 kubenswrapper[7271]: I0313 10:36:48.586041 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "4aaf36b4-e556-4723-a624-aa2edc69c301" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301"). InnerVolumeSpecName "etc-cvo-updatepayloads".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:36:48.586745 master-0 kubenswrapper[7271]: I0313 10:36:48.586140 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf36b4-e556-4723-a624-aa2edc69c301-service-ca\") pod \"4aaf36b4-e556-4723-a624-aa2edc69c301\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " Mar 13 10:36:48.586745 master-0 kubenswrapper[7271]: I0313 10:36:48.586163 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aaf36b4-e556-4723-a624-aa2edc69c301-kube-api-access\") pod \"4aaf36b4-e556-4723-a624-aa2edc69c301\" (UID: \"4aaf36b4-e556-4723-a624-aa2edc69c301\") " Mar 13 10:36:48.586745 master-0 kubenswrapper[7271]: I0313 10:36:48.586274 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "4aaf36b4-e556-4723-a624-aa2edc69c301" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301"). InnerVolumeSpecName "etc-ssl-certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:36:48.586745 master-0 kubenswrapper[7271]: I0313 10:36:48.586552 7271 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:48.586745 master-0 kubenswrapper[7271]: I0313 10:36:48.586568 7271 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aaf36b4-e556-4723-a624-aa2edc69c301-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:48.586997 master-0 kubenswrapper[7271]: I0313 10:36:48.586784 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aaf36b4-e556-4723-a624-aa2edc69c301-service-ca" (OuterVolumeSpecName: "service-ca") pod "4aaf36b4-e556-4723-a624-aa2edc69c301" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:36:48.589264 master-0 kubenswrapper[7271]: I0313 10:36:48.589199 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4aaf36b4-e556-4723-a624-aa2edc69c301" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:36:48.590103 master-0 kubenswrapper[7271]: I0313 10:36:48.590049 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aaf36b4-e556-4723-a624-aa2edc69c301-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4aaf36b4-e556-4723-a624-aa2edc69c301" (UID: "4aaf36b4-e556-4723-a624-aa2edc69c301"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:36:48.688353 master-0 kubenswrapper[7271]: I0313 10:36:48.688256 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aaf36b4-e556-4723-a624-aa2edc69c301-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:48.688353 master-0 kubenswrapper[7271]: I0313 10:36:48.688314 7271 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aaf36b4-e556-4723-a624-aa2edc69c301-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:48.688353 master-0 kubenswrapper[7271]: I0313 10:36:48.688324 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aaf36b4-e556-4723-a624-aa2edc69c301-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:48.806312 master-0 kubenswrapper[7271]: I0313 10:36:48.806223 7271 generic.go:334] "Generic (PLEG): container finished" podID="4aaf36b4-e556-4723-a624-aa2edc69c301" containerID="1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d" exitCode=0 Mar 13 10:36:48.806691 master-0 kubenswrapper[7271]: I0313 10:36:48.806324 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" Mar 13 10:36:48.806691 master-0 kubenswrapper[7271]: I0313 10:36:48.806312 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" event={"ID":"4aaf36b4-e556-4723-a624-aa2edc69c301","Type":"ContainerDied","Data":"1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d"} Mar 13 10:36:48.806691 master-0 kubenswrapper[7271]: I0313 10:36:48.806422 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z" event={"ID":"4aaf36b4-e556-4723-a624-aa2edc69c301","Type":"ContainerDied","Data":"cac6a5bd74eeb0c84d43669700e24c08a9a36b2d9ebb626bfd8e78bd9a500c83"} Mar 13 10:36:48.806691 master-0 kubenswrapper[7271]: I0313 10:36:48.806452 7271 scope.go:117] "RemoveContainer" containerID="1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d" Mar 13 10:36:48.809100 master-0 kubenswrapper[7271]: I0313 10:36:48.808661 7271 generic.go:334] "Generic (PLEG): container finished" podID="00e8e251-40d9-458a-92a7-9b2e91dc7359" containerID="ff391d9c59813842d72b9912aea0684a5fa08ec853cdfa9eb1e377087c9747df" exitCode=0 Mar 13 10:36:48.809100 master-0 kubenswrapper[7271]: I0313 10:36:48.808718 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"00e8e251-40d9-458a-92a7-9b2e91dc7359","Type":"ContainerDied","Data":"ff391d9c59813842d72b9912aea0684a5fa08ec853cdfa9eb1e377087c9747df"} Mar 13 10:36:48.809963 master-0 kubenswrapper[7271]: I0313 10:36:48.809914 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"feb7b798-15b5-4004-87d0-96ce9381cdbe","Type":"ContainerStarted","Data":"28aad4d86302888f158c61e3738904f7d878550af4392e7ed53add211247a0cd"} Mar 13 10:36:48.818505 master-0 kubenswrapper[7271]: I0313 
10:36:48.818458 7271 scope.go:117] "RemoveContainer" containerID="1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d" Mar 13 10:36:48.819094 master-0 kubenswrapper[7271]: E0313 10:36:48.819046 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d\": container with ID starting with 1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d not found: ID does not exist" containerID="1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d" Mar 13 10:36:48.819163 master-0 kubenswrapper[7271]: I0313 10:36:48.819111 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d"} err="failed to get container status \"1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d\": rpc error: code = NotFound desc = could not find container \"1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d\": container with ID starting with 1179c8b6381aace6c76f7d879ba341fdb74c5cd38ee738210276acc9b790c25d not found: ID does not exist" Mar 13 10:36:49.657076 master-0 kubenswrapper[7271]: I0313 10:36:49.656994 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf" path="/var/lib/kubelet/pods/ee9ccb5b-e38c-45dd-a762-3ece1ffa80bf/volumes" Mar 13 10:36:50.061520 master-0 kubenswrapper[7271]: I0313 10:36:50.061455 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 10:36:50.105757 master-0 kubenswrapper[7271]: I0313 10:36:50.105675 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00e8e251-40d9-458a-92a7-9b2e91dc7359-kube-api-access\") pod \"00e8e251-40d9-458a-92a7-9b2e91dc7359\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " Mar 13 10:36:50.105757 master-0 kubenswrapper[7271]: I0313 10:36:50.105741 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-var-lock\") pod \"00e8e251-40d9-458a-92a7-9b2e91dc7359\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " Mar 13 10:36:50.106115 master-0 kubenswrapper[7271]: I0313 10:36:50.106003 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-var-lock" (OuterVolumeSpecName: "var-lock") pod "00e8e251-40d9-458a-92a7-9b2e91dc7359" (UID: "00e8e251-40d9-458a-92a7-9b2e91dc7359"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:36:50.111247 master-0 kubenswrapper[7271]: I0313 10:36:50.111148 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e8e251-40d9-458a-92a7-9b2e91dc7359-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "00e8e251-40d9-458a-92a7-9b2e91dc7359" (UID: "00e8e251-40d9-458a-92a7-9b2e91dc7359"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:36:50.206631 master-0 kubenswrapper[7271]: I0313 10:36:50.206545 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-kubelet-dir\") pod \"00e8e251-40d9-458a-92a7-9b2e91dc7359\" (UID: \"00e8e251-40d9-458a-92a7-9b2e91dc7359\") " Mar 13 10:36:50.206955 master-0 kubenswrapper[7271]: I0313 10:36:50.206709 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "00e8e251-40d9-458a-92a7-9b2e91dc7359" (UID: "00e8e251-40d9-458a-92a7-9b2e91dc7359"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:36:50.207010 master-0 kubenswrapper[7271]: I0313 10:36:50.206987 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00e8e251-40d9-458a-92a7-9b2e91dc7359-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:50.207010 master-0 kubenswrapper[7271]: I0313 10:36:50.207005 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:50.207103 master-0 kubenswrapper[7271]: I0313 10:36:50.207022 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00e8e251-40d9-458a-92a7-9b2e91dc7359-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:36:50.510273 master-0 kubenswrapper[7271]: I0313 10:36:50.510190 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: 
\"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:36:50.510549 master-0 kubenswrapper[7271]: I0313 10:36:50.510478 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:36:50.510549 master-0 kubenswrapper[7271]: I0313 10:36:50.510514 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:36:50.511286 master-0 kubenswrapper[7271]: I0313 10:36:50.510551 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:36:50.514168 master-0 kubenswrapper[7271]: I0313 10:36:50.514124 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:36:50.514505 master-0 kubenswrapper[7271]: I0313 10:36:50.514473 7271 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:36:50.515841 master-0 kubenswrapper[7271]: I0313 10:36:50.515801 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:36:50.516505 master-0 kubenswrapper[7271]: I0313 10:36:50.516447 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:36:50.612422 master-0 kubenswrapper[7271]: I0313 10:36:50.612332 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:36:50.612422 master-0 kubenswrapper[7271]: I0313 10:36:50.612404 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:36:50.612422 master-0 kubenswrapper[7271]: I0313 10:36:50.612431 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:36:50.612846 master-0 kubenswrapper[7271]: I0313 10:36:50.612481 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:36:50.615753 master-0 kubenswrapper[7271]: I0313 10:36:50.615711 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"multus-admission-controller-8d675b596-d787l\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:36:50.616913 master-0 kubenswrapper[7271]: I0313 10:36:50.616841 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:36:50.617279 master-0 kubenswrapper[7271]: I0313 10:36:50.617224 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:36:50.617873 master-0 kubenswrapper[7271]: I0313 10:36:50.617819 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:36:50.795870 master-0 kubenswrapper[7271]: I0313 10:36:50.795721 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:36:50.796475 master-0 kubenswrapper[7271]: I0313 10:36:50.796053 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:36:50.802274 master-0 kubenswrapper[7271]: I0313 10:36:50.802212 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:36:50.804451 master-0 kubenswrapper[7271]: I0313 10:36:50.804414 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:36:50.808533 master-0 kubenswrapper[7271]: I0313 10:36:50.808465 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:36:50.808817 master-0 kubenswrapper[7271]: I0313 10:36:50.808659 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:36:50.808817 master-0 kubenswrapper[7271]: I0313 10:36:50.808722 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:36:50.809894 master-0 kubenswrapper[7271]: I0313 10:36:50.809834 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:36:50.822109 master-0 kubenswrapper[7271]: I0313 10:36:50.822031 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"00e8e251-40d9-458a-92a7-9b2e91dc7359","Type":"ContainerDied","Data":"b45c64d6449de0fbb67e8c6c87b585367854c2872ab4281e8171784f28b9d333"} Mar 13 10:36:50.822109 master-0 kubenswrapper[7271]: I0313 10:36:50.822082 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 10:36:50.822555 master-0 kubenswrapper[7271]: I0313 10:36:50.822082 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b45c64d6449de0fbb67e8c6c87b585367854c2872ab4281e8171784f28b9d333" Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: I0313 10:36:58.465665 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: 
[+]poststarthook/max-in-flight-filter ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:36:58.465790 master-0 kubenswrapper[7271]: I0313 10:36:58.465749 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:37:01.154692 master-0 kubenswrapper[7271]: E0313 10:37:01.154620 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 10:37:01.155372 master-0 kubenswrapper[7271]: I0313 10:37:01.155246 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 10:37:01.170111 master-0 kubenswrapper[7271]: W0313 10:37:01.170034 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-0e5c66499fd5264d8efcd7a04302a8a2b29aa8f0cd7b296d33d3eec121ff09f9 WatchSource:0}: Error finding container 0e5c66499fd5264d8efcd7a04302a8a2b29aa8f0cd7b296d33d3eec121ff09f9: Status 404 returned error can't find the container with id 0e5c66499fd5264d8efcd7a04302a8a2b29aa8f0cd7b296d33d3eec121ff09f9 Mar 13 10:37:01.875995 master-0 kubenswrapper[7271]: I0313 10:37:01.875919 7271 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="b25246b87fe6711f1f7c66db1d40e94041f17222319c643c72a0f13f39f94ce3" exitCode=1 Mar 13 10:37:01.876232 master-0 kubenswrapper[7271]: I0313 10:37:01.876012 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"b25246b87fe6711f1f7c66db1d40e94041f17222319c643c72a0f13f39f94ce3"} Mar 13 10:37:01.876232 master-0 kubenswrapper[7271]: I0313 10:37:01.876058 7271 scope.go:117] "RemoveContainer" containerID="80284c850caf3e93eb3675f42a22bd510ddbac6e27d80f3eae83dafefe028254" Mar 13 10:37:01.876646 master-0 kubenswrapper[7271]: I0313 10:37:01.876627 7271 scope.go:117] "RemoveContainer" containerID="b25246b87fe6711f1f7c66db1d40e94041f17222319c643c72a0f13f39f94ce3" Mar 13 10:37:01.878786 master-0 kubenswrapper[7271]: I0313 10:37:01.878737 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b"} Mar 13 10:37:01.878786 master-0 kubenswrapper[7271]: I0313 10:37:01.878780 7271 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"0e5c66499fd5264d8efcd7a04302a8a2b29aa8f0cd7b296d33d3eec121ff09f9"} Mar 13 10:37:01.880062 master-0 kubenswrapper[7271]: I0313 10:37:01.880029 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nsg74_282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/openshift-controller-manager-operator/0.log" Mar 13 10:37:01.880135 master-0 kubenswrapper[7271]: I0313 10:37:01.880073 7271 generic.go:334] "Generic (PLEG): container finished" podID="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" containerID="5c959a07b9cea59f8d22bac12b5ad0b337201cde45ef40482caaae6f05ee2a56" exitCode=1 Mar 13 10:37:01.880135 master-0 kubenswrapper[7271]: I0313 10:37:01.880100 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" event={"ID":"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303","Type":"ContainerDied","Data":"5c959a07b9cea59f8d22bac12b5ad0b337201cde45ef40482caaae6f05ee2a56"} Mar 13 10:37:01.880412 master-0 kubenswrapper[7271]: I0313 10:37:01.880387 7271 scope.go:117] "RemoveContainer" containerID="5c959a07b9cea59f8d22bac12b5ad0b337201cde45ef40482caaae6f05ee2a56" Mar 13 10:37:02.887631 master-0 kubenswrapper[7271]: I0313 10:37:02.887549 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"281e47a8ccfe9b7bd7d1fae86c8e235e63f17e9935336f3e6ad3bed18be23300"} Mar 13 10:37:02.889461 master-0 kubenswrapper[7271]: I0313 10:37:02.889435 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nsg74_282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/openshift-controller-manager-operator/0.log" Mar 13 10:37:02.889551 master-0 kubenswrapper[7271]: I0313 10:37:02.889501 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" event={"ID":"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303","Type":"ContainerStarted","Data":"53a8fd339624b3a824ba77b1d93455581709099722d103fd93b0ffb255eebf03"} Mar 13 10:37:02.891001 master-0 kubenswrapper[7271]: I0313 10:37:02.890971 7271 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b" exitCode=0 Mar 13 10:37:02.891101 master-0 kubenswrapper[7271]: I0313 10:37:02.891001 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b"} Mar 13 10:37:03.422944 master-0 kubenswrapper[7271]: I0313 10:37:03.422888 7271 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-cwlxw container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Mar 13 10:37:03.423067 master-0 kubenswrapper[7271]: I0313 10:37:03.422969 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" podUID="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Mar 13 10:37:03.795936 master-0 
kubenswrapper[7271]: I0313 10:37:03.795769 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:37:03.898423 master-0 kubenswrapper[7271]: I0313 10:37:03.898342 7271 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="3bbb19054cdef32aad8515587717178e7bce7c315eb6bc762119d4e27dd7a9b0" exitCode=1 Mar 13 10:37:03.899289 master-0 kubenswrapper[7271]: I0313 10:37:03.898451 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"3bbb19054cdef32aad8515587717178e7bce7c315eb6bc762119d4e27dd7a9b0"} Mar 13 10:37:03.899289 master-0 kubenswrapper[7271]: I0313 10:37:03.899210 7271 scope.go:117] "RemoveContainer" containerID="3bbb19054cdef32aad8515587717178e7bce7c315eb6bc762119d4e27dd7a9b0" Mar 13 10:37:04.905388 master-0 kubenswrapper[7271]: I0313 10:37:04.905310 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"e29c9e8859ea50a213d1056d538b6a3cc96cdadb35b68c7127f1a2cbb6be6418"} Mar 13 10:37:07.469055 master-0 kubenswrapper[7271]: I0313 10:37:07.468954 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: I0313 10:37:07.471306 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld Mar 13 10:37:07.471365 master-0 
kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:37:07.471365 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:37:07.472085 master-0 kubenswrapper[7271]: I0313 10:37:07.471383 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:37:07.941542 master-0 kubenswrapper[7271]: E0313 10:37:07.941299 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:36:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:36:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:36:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:36:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d
92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\
"sizeBytes\\\":396521759}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:37:08.086816 master-0 kubenswrapper[7271]: E0313 10:37:08.086718 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:37:10.470250 master-0 kubenswrapper[7271]: I0313 10:37:10.470141 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:37:13.422478 master-0 kubenswrapper[7271]: I0313 10:37:13.422413 7271 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-cwlxw container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Mar 13 10:37:13.423005 master-0 kubenswrapper[7271]: I0313 10:37:13.422507 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" podUID="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Mar 13 10:37:15.896867 master-0 kubenswrapper[7271]: E0313 10:37:15.896825 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to 
complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 10:37:15.961539 master-0 kubenswrapper[7271]: I0313 10:37:15.961381 7271 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe" exitCode=0 Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: I0313 10:37:16.477672 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: 
[+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:37:16.477752 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:37:16.478391 master-0 kubenswrapper[7271]: I0313 10:37:16.477766 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:37:16.971036 master-0 kubenswrapper[7271]: I0313 10:37:16.970935 7271 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e" exitCode=0 Mar 13 10:37:16.971036 master-0 kubenswrapper[7271]: I0313 10:37:16.971007 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e"} Mar 13 10:37:17.942916 master-0 kubenswrapper[7271]: E0313 10:37:17.942800 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:37:18.089232 master-0 kubenswrapper[7271]: E0313 10:37:18.087907 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:37:18.179444 master-0 kubenswrapper[7271]: I0313 10:37:18.179342 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 13 10:37:18.179949 master-0 kubenswrapper[7271]: I0313 10:37:18.179510 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:37:18.297336 master-0 kubenswrapper[7271]: I0313 10:37:18.297232 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 13 10:37:18.297788 master-0 kubenswrapper[7271]: I0313 10:37:18.297422 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:37:18.297788 master-0 kubenswrapper[7271]: I0313 10:37:18.297490 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 13 10:37:18.297788 master-0 kubenswrapper[7271]: I0313 10:37:18.297641 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:37:18.298067 master-0 kubenswrapper[7271]: I0313 10:37:18.297833 7271 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 10:37:18.298067 master-0 kubenswrapper[7271]: I0313 10:37:18.297857 7271 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:37:18.995373 master-0 kubenswrapper[7271]: I0313 10:37:18.995297 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 13 10:37:18.995692 master-0 kubenswrapper[7271]: I0313 10:37:18.995392 7271 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b" exitCode=137 Mar 13 10:37:18.995692 master-0 kubenswrapper[7271]: I0313 10:37:18.995475 7271 scope.go:117] "RemoveContainer" containerID="fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe" Mar 13 10:37:18.995692 master-0 kubenswrapper[7271]: I0313 10:37:18.995531 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 10:37:19.015271 master-0 kubenswrapper[7271]: I0313 10:37:19.015214 7271 scope.go:117] "RemoveContainer" containerID="854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b" Mar 13 10:37:19.031427 master-0 kubenswrapper[7271]: I0313 10:37:19.031377 7271 scope.go:117] "RemoveContainer" containerID="fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe" Mar 13 10:37:19.032113 master-0 kubenswrapper[7271]: E0313 10:37:19.032072 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe\": container with ID starting with fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe not found: ID does not exist" containerID="fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe" Mar 13 10:37:19.032177 master-0 kubenswrapper[7271]: I0313 10:37:19.032122 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe"} err="failed to get container status \"fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe\": rpc error: code = NotFound desc = could not find container \"fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe\": container with ID starting with fbb8ff7febe388bbeaa88afee0edfc23ebff6c9257eadf9114a42d9cbd2b3ebe not found: ID does not exist" Mar 13 10:37:19.032177 master-0 kubenswrapper[7271]: I0313 10:37:19.032156 7271 scope.go:117] "RemoveContainer" containerID="854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b" Mar 13 10:37:19.032877 master-0 kubenswrapper[7271]: E0313 10:37:19.032821 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b\": 
container with ID starting with 854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b not found: ID does not exist" containerID="854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b" Mar 13 10:37:19.032938 master-0 kubenswrapper[7271]: I0313 10:37:19.032888 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b"} err="failed to get container status \"854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b\": rpc error: code = NotFound desc = could not find container \"854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b\": container with ID starting with 854f2604690570925e6ded05484c1d3ca69a3b566dccd4e395f158c5b0ec2a6b not found: ID does not exist" Mar 13 10:37:19.369335 master-0 kubenswrapper[7271]: I0313 10:37:19.369260 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 13 10:37:19.370330 master-0 kubenswrapper[7271]: I0313 10:37:19.370167 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 13 10:37:19.442882 master-0 kubenswrapper[7271]: I0313 10:37:19.442770 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 
13 10:37:19.442882 master-0 kubenswrapper[7271]: I0313 10:37:19.442850 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 13 10:37:19.651574 master-0 kubenswrapper[7271]: I0313 10:37:19.651336 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes" Mar 13 10:37:19.651981 master-0 kubenswrapper[7271]: I0313 10:37:19.651720 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 10:37:20.484509 master-0 kubenswrapper[7271]: I0313 10:37:20.470049 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:37:22.011050 master-0 kubenswrapper[7271]: I0313 10:37:22.011013 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-6vpl4_1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/network-operator/0.log" Mar 13 10:37:22.011505 master-0 kubenswrapper[7271]: I0313 10:37:22.011056 7271 generic.go:334] "Generic (PLEG): container finished" podID="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" containerID="5e2eaafddd132326dc9e3d7a39739553509b59eb3a4133fcdb22787eb5fde49c" exitCode=255 Mar 13 10:37:22.106710 master-0 kubenswrapper[7271]: E0313 10:37:22.106521 7271 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did 
not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c60463d247fc8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:36:48.0669122 +0000 UTC m=+62.593734590,LastTimestamp:2026-03-13 10:36:48.0669122 +0000 UTC m=+62.593734590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:37:22.368411 master-0 kubenswrapper[7271]: I0313 10:37:22.368328 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 13 10:37:22.368666 master-0 kubenswrapper[7271]: I0313 10:37:22.368432 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 13 10:37:22.442134 master-0 kubenswrapper[7271]: I0313 10:37:22.442059 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 13 10:37:22.442134 master-0 
kubenswrapper[7271]: I0313 10:37:22.442127 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:23.017997 master-0 kubenswrapper[7271]: I0313 10:37:23.017936 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6954c8766d-g8z48_6317b62a-46e2-4a45-9c29-cb04c40d4425/controller-manager/0.log"
Mar 13 10:37:23.017997 master-0 kubenswrapper[7271]: I0313 10:37:23.017985 7271 generic.go:334] "Generic (PLEG): container finished" podID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerID="e071f5df1cf13730e7c3a2d7e673c1b7527862b8e1f69ed525efba676776f319" exitCode=255
Mar 13 10:37:23.422152 master-0 kubenswrapper[7271]: I0313 10:37:23.422067 7271 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-cwlxw container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body=
Mar 13 10:37:23.422152 master-0 kubenswrapper[7271]: I0313 10:37:23.422139 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" podUID="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused"
Mar 13 10:37:25.369110 master-0 kubenswrapper[7271]: I0313 10:37:25.369025 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:25.369110 master-0 kubenswrapper[7271]: I0313 10:37:25.369091 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:25.442623 master-0 kubenswrapper[7271]: I0313 10:37:25.442487 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:25.442874 master-0 kubenswrapper[7271]: I0313 10:37:25.442629 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: I0313 10:37:25.483229 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]log ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:37:25.483340 master-0 kubenswrapper[7271]: livez check failed
Mar 13 10:37:25.483884 master-0 kubenswrapper[7271]: I0313 10:37:25.483376 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:37:26.437447 master-0 kubenswrapper[7271]: I0313 10:37:26.437346 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:37:26.438182 master-0 kubenswrapper[7271]: I0313 10:37:26.437452 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:37:26.438182 master-0 kubenswrapper[7271]: I0313 10:37:26.437374 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:37:26.438182 master-0 kubenswrapper[7271]: I0313 10:37:26.437653 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:37:27.943403 master-0 kubenswrapper[7271]: E0313 10:37:27.943281 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:37:28.089076 master-0 kubenswrapper[7271]: E0313 10:37:28.088960 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:37:28.442564 master-0 kubenswrapper[7271]: I0313 10:37:28.442493 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:28.443015 master-0 kubenswrapper[7271]: I0313 10:37:28.442950 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:29.980685 master-0 kubenswrapper[7271]: E0313 10:37:29.980627 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 13 10:37:30.057662 master-0 kubenswrapper[7271]: I0313 10:37:30.057492 7271 generic.go:334] "Generic (PLEG): container finished" podID="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" containerID="07efb32e685572e6b4d6844e3569402a8bdfbf11ae0829c85acd5de7788ca4d9" exitCode=0
Mar 13 10:37:30.469478 master-0 kubenswrapper[7271]: I0313 10:37:30.469182 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:37:31.064559 master-0 kubenswrapper[7271]: I0313 10:37:31.064460 7271 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448" exitCode=0
Mar 13 10:37:31.068483 master-0 kubenswrapper[7271]: I0313 10:37:31.068438 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_9e06733a-9c47-4bcf-a5e2-946db8e2714b/installer/0.log"
Mar 13 10:37:31.068606 master-0 kubenswrapper[7271]: I0313 10:37:31.068488 7271 generic.go:334] "Generic (PLEG): container finished" podID="9e06733a-9c47-4bcf-a5e2-946db8e2714b" containerID="c87d032f992ab15941d07ccbd459ecd39c5fd54e6df8b197a56c0bc747f7d534" exitCode=1
Mar 13 10:37:31.442445 master-0 kubenswrapper[7271]: I0313 10:37:31.442276 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:31.442445 master-0 kubenswrapper[7271]: I0313 10:37:31.442387 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:32.075627 master-0 kubenswrapper[7271]: I0313 10:37:32.075528 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_7baf3efc-04dc-4c17-9c2a-397ac022d281/installer/0.log"
Mar 13 10:37:32.075627 master-0 kubenswrapper[7271]: I0313 10:37:32.075612 7271 generic.go:334] "Generic (PLEG): container finished" podID="7baf3efc-04dc-4c17-9c2a-397ac022d281" containerID="56c9b868392613f72b3a821d9f4fd3508fb4759378ef047d1a2286ea13733ed0" exitCode=1
Mar 13 10:37:34.442461 master-0 kubenswrapper[7271]: I0313 10:37:34.442381 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:34.444206 master-0 kubenswrapper[7271]: I0313 10:37:34.444101 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: I0313 10:37:34.492994 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]log ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:37:34.493063 master-0 kubenswrapper[7271]: livez check failed
Mar 13 10:37:34.494190 master-0 kubenswrapper[7271]: I0313 10:37:34.493073 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:37:36.437114 master-0 kubenswrapper[7271]: I0313 10:37:36.437032 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:37:36.437756 master-0 kubenswrapper[7271]: I0313 10:37:36.437151 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:37:36.437756 master-0 kubenswrapper[7271]: I0313 10:37:36.437046 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:37:36.437756 master-0 kubenswrapper[7271]: I0313 10:37:36.437262 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:37:37.442100 master-0 kubenswrapper[7271]: I0313 10:37:37.442048 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:37.442767 master-0 kubenswrapper[7271]: I0313 10:37:37.442716 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:37.944042 master-0 kubenswrapper[7271]: E0313 10:37:37.943947 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:37:38.089934 master-0 kubenswrapper[7271]: E0313 10:37:38.089841 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:37:40.442425 master-0 kubenswrapper[7271]: I0313 10:37:40.442331 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:40.443054 master-0 kubenswrapper[7271]: I0313 10:37:40.442441 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:43.126139 master-0 kubenswrapper[7271]: I0313 10:37:43.126075 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-9z8mk_f87662b9-6ac6-44f3-8a16-ff858c2baa91/approver/0.log"
Mar 13 10:37:43.126800 master-0 kubenswrapper[7271]: I0313 10:37:43.126459 7271 generic.go:334] "Generic (PLEG): container finished" podID="f87662b9-6ac6-44f3-8a16-ff858c2baa91" containerID="d2e7a9c17281b6d5f7f20fbe7b128af98dc009aec3115a4cb2ebd1a39090d634" exitCode=1
Mar 13 10:37:43.442628 master-0 kubenswrapper[7271]: I0313 10:37:43.442407 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:43.442628 master-0 kubenswrapper[7271]: I0313 10:37:43.442522 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: I0313 10:37:43.498892 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]log ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: livez check failed
Mar 13 10:37:43.498975 master-0 kubenswrapper[7271]: I0313 10:37:43.498976 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:37:44.072832 master-0 kubenswrapper[7271]: E0313 10:37:44.072745 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 13 10:37:46.162899 master-0 kubenswrapper[7271]: I0313 10:37:46.162826 7271 generic.go:334] "Generic (PLEG): container finished" podID="8f9db15a-8854-485b-9863-9cbe5dddd977" containerID="30ed7322c0091d1c760c898b8eeff7c2a46e380aac09f0741b2738a7131c9763" exitCode=0
Mar 13 10:37:46.436723 master-0 kubenswrapper[7271]: I0313 10:37:46.436461 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:37:46.436723 master-0 kubenswrapper[7271]: I0313 10:37:46.436478 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:37:46.436723 master-0 kubenswrapper[7271]: I0313 10:37:46.436712 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:37:46.437125 master-0 kubenswrapper[7271]: I0313 10:37:46.436625 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:37:46.442145 master-0 kubenswrapper[7271]: I0313 10:37:46.442030 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:46.442314 master-0 kubenswrapper[7271]: I0313 10:37:46.442146 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:47.180124 master-0 kubenswrapper[7271]: I0313 10:37:47.180053 7271 generic.go:334] "Generic (PLEG): container finished" podID="86ae8cb8-72b3-4be6-9feb-ee0c0da42dba" containerID="2c461d42e265a3320bcaee208db9040eedffe39900d9e8aa36490e00a5c604c0" exitCode=0
Mar 13 10:37:47.181661 master-0 kubenswrapper[7271]: I0313 10:37:47.181631 7271 generic.go:334] "Generic (PLEG): container finished" podID="a1a998af-4fc0-4078-a6a0-93dde6c00508" containerID="dbff0a4ca77dfd3c5dce218a106dba837080cd80ee7f274b5ebceb8f682ccabd" exitCode=0
Mar 13 10:37:47.944837 master-0 kubenswrapper[7271]: E0313 10:37:47.944777 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:37:47.944837 master-0 kubenswrapper[7271]: E0313 10:37:47.944831 7271 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 10:37:48.090802 master-0 kubenswrapper[7271]: E0313 10:37:48.090737 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:37:48.091048 master-0 kubenswrapper[7271]: I0313 10:37:48.091021 7271 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 13 10:37:48.487565 master-0 kubenswrapper[7271]: I0313 10:37:48.487472 7271 status_manager.go:851] "Failed to get status for pod" podUID="11927952-723f-4d6d-922b-73139abe8877" pod="openshift-dns/dns-default-zc596" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods dns-default-zc596)"
Mar 13 10:37:49.441909 master-0 kubenswrapper[7271]: I0313 10:37:49.441838 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:49.442163 master-0 kubenswrapper[7271]: I0313 10:37:49.441913 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:51.623963 master-0 kubenswrapper[7271]: E0313 10:37:51.623910 7271 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 13 10:37:51.623963 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-677db989d6-tzd9b_openshift-ingress-operator_7667717b-fb74-456b-8615-16475cb69e98_0(fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37): error adding pod openshift-ingress-operator_ingress-operator-677db989d6-tzd9b to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37" Netns:"/var/run/netns/2061b7a0-3998-49d1-960a-0eb100561bd3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-677db989d6-tzd9b;K8S_POD_INFRA_CONTAINER_ID=fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37;K8S_POD_UID=7667717b-fb74-456b-8615-16475cb69e98" Path:"" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-677db989d6-tzd9b] networking: Multus: [openshift-ingress-operator/ingress-operator-677db989d6-tzd9b/7667717b-fb74-456b-8615-16475cb69e98]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-677db989d6-tzd9b in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-677db989d6-tzd9b in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-677db989d6-tzd9b?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 13 10:37:51.623963 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 13 10:37:51.623963 master-0 kubenswrapper[7271]: >
Mar 13 10:37:51.624508 master-0 kubenswrapper[7271]: E0313 10:37:51.624481 7271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 13 10:37:51.624508 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-677db989d6-tzd9b_openshift-ingress-operator_7667717b-fb74-456b-8615-16475cb69e98_0(fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37): error adding pod openshift-ingress-operator_ingress-operator-677db989d6-tzd9b to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37" Netns:"/var/run/netns/2061b7a0-3998-49d1-960a-0eb100561bd3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-677db989d6-tzd9b;K8S_POD_INFRA_CONTAINER_ID=fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37;K8S_POD_UID=7667717b-fb74-456b-8615-16475cb69e98" Path:"" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-677db989d6-tzd9b] networking: Multus: [openshift-ingress-operator/ingress-operator-677db989d6-tzd9b/7667717b-fb74-456b-8615-16475cb69e98]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-677db989d6-tzd9b in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-677db989d6-tzd9b in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-677db989d6-tzd9b?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 13 10:37:51.624508 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 13 10:37:51.624508 master-0 kubenswrapper[7271]: > pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:37:51.624845 master-0 kubenswrapper[7271]: E0313 10:37:51.624701 7271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Mar 13 10:37:51.624845 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-677db989d6-tzd9b_openshift-ingress-operator_7667717b-fb74-456b-8615-16475cb69e98_0(fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37): error adding pod openshift-ingress-operator_ingress-operator-677db989d6-tzd9b to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37" Netns:"/var/run/netns/2061b7a0-3998-49d1-960a-0eb100561bd3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-677db989d6-tzd9b;K8S_POD_INFRA_CONTAINER_ID=fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37;K8S_POD_UID=7667717b-fb74-456b-8615-16475cb69e98" Path:"" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-677db989d6-tzd9b] networking: Multus: [openshift-ingress-operator/ingress-operator-677db989d6-tzd9b/7667717b-fb74-456b-8615-16475cb69e98]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-677db989d6-tzd9b in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-677db989d6-tzd9b in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-677db989d6-tzd9b?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 13 10:37:51.624845 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 13 10:37:51.624845 master-0 kubenswrapper[7271]: > pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:37:51.624845 master-0 kubenswrapper[7271]: E0313 10:37:51.624781 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-677db989d6-tzd9b_openshift-ingress-operator_7667717b-fb74-456b-8615-16475cb69e98_0(fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37): error adding pod openshift-ingress-operator_ingress-operator-677db989d6-tzd9b to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37\\\" Netns:\\\"/var/run/netns/2061b7a0-3998-49d1-960a-0eb100561bd3\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-677db989d6-tzd9b;K8S_POD_INFRA_CONTAINER_ID=fe5b83575010d17977eb80af7b747af376d5a3108479670a917677b44eeb4e37;K8S_POD_UID=7667717b-fb74-456b-8615-16475cb69e98\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-677db989d6-tzd9b] networking: Multus: [openshift-ingress-operator/ingress-operator-677db989d6-tzd9b/7667717b-fb74-456b-8615-16475cb69e98]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-677db989d6-tzd9b in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-677db989d6-tzd9b in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-677db989d6-tzd9b?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98"
Mar 13 10:37:51.703615 master-0 kubenswrapper[7271]: E0313 10:37:51.703552 7271 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 13 10:37:51.703615 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-674cbfbd9d-vk9qz_openshift-monitoring_4d5479f3-51ec-4b93-8188-21cdda44828d_0(be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832): error adding pod openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-vk9qz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832" Netns:"/var/run/netns/67a540f5-f9d9-4abb-9242-bdb6ed4a8791" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-674cbfbd9d-vk9qz;K8S_POD_INFRA_CONTAINER_ID=be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832;K8S_POD_UID=4d5479f3-51ec-4b93-8188-21cdda44828d" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz/4d5479f3-51ec-4b93-8188-21cdda44828d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-674cbfbd9d-vk9qz in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-674cbfbd9d-vk9qz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-674cbfbd9d-vk9qz?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 13 10:37:51.703615 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 13 10:37:51.703615 master-0 kubenswrapper[7271]: >
Mar 13 10:37:51.703842 master-0 kubenswrapper[7271]: E0313 10:37:51.703645 7271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 13 10:37:51.703842 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-674cbfbd9d-vk9qz_openshift-monitoring_4d5479f3-51ec-4b93-8188-21cdda44828d_0(be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832): error adding pod openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-vk9qz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832" Netns:"/var/run/netns/67a540f5-f9d9-4abb-9242-bdb6ed4a8791" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-674cbfbd9d-vk9qz;K8S_POD_INFRA_CONTAINER_ID=be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832;K8S_POD_UID=4d5479f3-51ec-4b93-8188-21cdda44828d" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz/4d5479f3-51ec-4b93-8188-21cdda44828d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-674cbfbd9d-vk9qz in out of cluster comm: SetNetworkStatus: failed to update the pod
cluster-monitoring-operator-674cbfbd9d-vk9qz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-674cbfbd9d-vk9qz?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:51.703842 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:51.703842 master-0 kubenswrapper[7271]: > pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:37:51.703842 master-0 kubenswrapper[7271]: E0313 10:37:51.703669 7271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 10:37:51.703842 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-674cbfbd9d-vk9qz_openshift-monitoring_4d5479f3-51ec-4b93-8188-21cdda44828d_0(be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832): error adding pod openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-vk9qz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832" Netns:"/var/run/netns/67a540f5-f9d9-4abb-9242-bdb6ed4a8791" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-674cbfbd9d-vk9qz;K8S_POD_INFRA_CONTAINER_ID=be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832;K8S_POD_UID=4d5479f3-51ec-4b93-8188-21cdda44828d" Path:"" 
ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz/4d5479f3-51ec-4b93-8188-21cdda44828d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-674cbfbd9d-vk9qz in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-674cbfbd9d-vk9qz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-674cbfbd9d-vk9qz?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:51.703842 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:51.703842 master-0 kubenswrapper[7271]: > pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:37:51.703842 master-0 kubenswrapper[7271]: E0313 10:37:51.703736 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-monitoring-operator-674cbfbd9d-vk9qz_openshift-monitoring(4d5479f3-51ec-4b93-8188-21cdda44828d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-monitoring-operator-674cbfbd9d-vk9qz_openshift-monitoring(4d5479f3-51ec-4b93-8188-21cdda44828d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-monitoring-operator-674cbfbd9d-vk9qz_openshift-monitoring_4d5479f3-51ec-4b93-8188-21cdda44828d_0(be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832): error adding pod openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-vk9qz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832\\\" Netns:\\\"/var/run/netns/67a540f5-f9d9-4abb-9242-bdb6ed4a8791\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-674cbfbd9d-vk9qz;K8S_POD_INFRA_CONTAINER_ID=be6f60c6c4b83459cda1ecef33904cac01ab5fad1395581ffcc6a11e31443832;K8S_POD_UID=4d5479f3-51ec-4b93-8188-21cdda44828d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz/4d5479f3-51ec-4b93-8188-21cdda44828d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-674cbfbd9d-vk9qz in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-674cbfbd9d-vk9qz in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-674cbfbd9d-vk9qz?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" podUID="4d5479f3-51ec-4b93-8188-21cdda44828d" Mar 13 10:37:52.016391 master-0 kubenswrapper[7271]: E0313 10:37:52.016323 7271 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 10:37:52.016391 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-7d9c49f57b-2j5jl_openshift-operator-lifecycle-manager_c455a959-d764-4b4f-a1e0-95c27495dd9d_0(abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178): error adding pod openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-2j5jl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178" Netns:"/var/run/netns/bf3e5f36-d656-43df-8516-223b95053d95" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-7d9c49f57b-2j5jl;K8S_POD_INFRA_CONTAINER_ID=abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178;K8S_POD_UID=c455a959-d764-4b4f-a1e0-95c27495dd9d" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl/c455a959-d764-4b4f-a1e0-95c27495dd9d]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-7d9c49f57b-2j5jl in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-7d9c49f57b-2j5jl in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-7d9c49f57b-2j5jl?timeout=1m0s": context deadline exceeded Mar 13 10:37:52.016391 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.016391 master-0 kubenswrapper[7271]: > Mar 13 10:37:52.016607 master-0 kubenswrapper[7271]: E0313 10:37:52.016432 7271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 10:37:52.016607 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-7d9c49f57b-2j5jl_openshift-operator-lifecycle-manager_c455a959-d764-4b4f-a1e0-95c27495dd9d_0(abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178): error adding pod openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-2j5jl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178" Netns:"/var/run/netns/bf3e5f36-d656-43df-8516-223b95053d95" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-7d9c49f57b-2j5jl;K8S_POD_INFRA_CONTAINER_ID=abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178;K8S_POD_UID=c455a959-d764-4b4f-a1e0-95c27495dd9d" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl/c455a959-d764-4b4f-a1e0-95c27495dd9d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-7d9c49f57b-2j5jl in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-7d9c49f57b-2j5jl in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-7d9c49f57b-2j5jl?timeout=1m0s": context deadline exceeded Mar 13 10:37:52.016607 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.016607 master-0 kubenswrapper[7271]: > pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:37:52.016607 master-0 kubenswrapper[7271]: E0313 10:37:52.016486 7271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 10:37:52.016607 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-7d9c49f57b-2j5jl_openshift-operator-lifecycle-manager_c455a959-d764-4b4f-a1e0-95c27495dd9d_0(abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178): error 
adding pod openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-2j5jl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178" Netns:"/var/run/netns/bf3e5f36-d656-43df-8516-223b95053d95" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-7d9c49f57b-2j5jl;K8S_POD_INFRA_CONTAINER_ID=abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178;K8S_POD_UID=c455a959-d764-4b4f-a1e0-95c27495dd9d" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl/c455a959-d764-4b4f-a1e0-95c27495dd9d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-7d9c49f57b-2j5jl in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-7d9c49f57b-2j5jl in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-7d9c49f57b-2j5jl?timeout=1m0s": context deadline exceeded Mar 13 10:37:52.016607 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.016607 master-0 kubenswrapper[7271]: > pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:37:52.016929 master-0 kubenswrapper[7271]: E0313 
10:37:52.016576 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"catalog-operator-7d9c49f57b-2j5jl_openshift-operator-lifecycle-manager(c455a959-d764-4b4f-a1e0-95c27495dd9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"catalog-operator-7d9c49f57b-2j5jl_openshift-operator-lifecycle-manager(c455a959-d764-4b4f-a1e0-95c27495dd9d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-7d9c49f57b-2j5jl_openshift-operator-lifecycle-manager_c455a959-d764-4b4f-a1e0-95c27495dd9d_0(abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178): error adding pod openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-2j5jl to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178\\\" Netns:\\\"/var/run/netns/bf3e5f36-d656-43df-8516-223b95053d95\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-7d9c49f57b-2j5jl;K8S_POD_INFRA_CONTAINER_ID=abc2d94a2830757ad54703560beae03da352ca64ea83e36ac972e407d84b7178;K8S_POD_UID=c455a959-d764-4b4f-a1e0-95c27495dd9d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl/c455a959-d764-4b4f-a1e0-95c27495dd9d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-7d9c49f57b-2j5jl in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-7d9c49f57b-2j5jl in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-7d9c49f57b-2j5jl?timeout=1m0s\\\": context deadline exceeded\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" podUID="c455a959-d764-4b4f-a1e0-95c27495dd9d" Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: E0313 10:37:52.096894 7271 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-854648ff6d-d5b45_openshift-operator-lifecycle-manager_8a305f45-8689-45a8-8c8b-5954f2c863df_0(6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74): error adding pod openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-d5b45 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74" Netns:"/var/run/netns/eda6c5ea-7822-49b0-8817-36bc270b04e1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-854648ff6d-d5b45;K8S_POD_INFRA_CONTAINER_ID=6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74;K8S_POD_UID=8a305f45-8689-45a8-8c8b-5954f2c863df" Path:"" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45/8a305f45-8689-45a8-8c8b-5954f2c863df]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-854648ff6d-d5b45 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-854648ff6d-d5b45 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-854648ff6d-d5b45?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: > Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: E0313 10:37:52.097000 7271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-854648ff6d-d5b45_openshift-operator-lifecycle-manager_8a305f45-8689-45a8-8c8b-5954f2c863df_0(6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74): error adding pod openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-d5b45 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74" Netns:"/var/run/netns/eda6c5ea-7822-49b0-8817-36bc270b04e1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-854648ff6d-d5b45;K8S_POD_INFRA_CONTAINER_ID=6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74;K8S_POD_UID=8a305f45-8689-45a8-8c8b-5954f2c863df" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45/8a305f45-8689-45a8-8c8b-5954f2c863df]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-854648ff6d-d5b45 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-854648ff6d-d5b45 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-854648ff6d-d5b45?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: > pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: E0313 10:37:52.097022 7271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: rpc error: code 
= Unknown desc = failed to create pod network sandbox k8s_package-server-manager-854648ff6d-d5b45_openshift-operator-lifecycle-manager_8a305f45-8689-45a8-8c8b-5954f2c863df_0(6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74): error adding pod openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-d5b45 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74" Netns:"/var/run/netns/eda6c5ea-7822-49b0-8817-36bc270b04e1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-854648ff6d-d5b45;K8S_POD_INFRA_CONTAINER_ID=6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74;K8S_POD_UID=8a305f45-8689-45a8-8c8b-5954f2c863df" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45/8a305f45-8689-45a8-8c8b-5954f2c863df]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-854648ff6d-d5b45 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-854648ff6d-d5b45 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-854648ff6d-d5b45?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: > pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:37:52.101712 master-0 kubenswrapper[7271]: E0313 10:37:52.097127 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"package-server-manager-854648ff6d-d5b45_openshift-operator-lifecycle-manager(8a305f45-8689-45a8-8c8b-5954f2c863df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"package-server-manager-854648ff6d-d5b45_openshift-operator-lifecycle-manager(8a305f45-8689-45a8-8c8b-5954f2c863df)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-854648ff6d-d5b45_openshift-operator-lifecycle-manager_8a305f45-8689-45a8-8c8b-5954f2c863df_0(6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74): error adding pod openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-d5b45 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74\\\" Netns:\\\"/var/run/netns/eda6c5ea-7822-49b0-8817-36bc270b04e1\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-854648ff6d-d5b45;K8S_POD_INFRA_CONTAINER_ID=6cb81f4c72da50ad600ac5e03735b6bbe723a42675e865f0b69a00dc8d11db74;K8S_POD_UID=8a305f45-8689-45a8-8c8b-5954f2c863df\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45/8a305f45-8689-45a8-8c8b-5954f2c863df]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-854648ff6d-d5b45 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-854648ff6d-d5b45 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-854648ff6d-d5b45?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" podUID="8a305f45-8689-45a8-8c8b-5954f2c863df" Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: E0313 10:37:52.102226 7271 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-jz2lp_openshift-multus_79bb87a4-8834-4c73-834e-356ccc1f7f9b_0(4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270): error adding pod openshift-multus_network-metrics-daemon-jz2lp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with 
status 400: 'ContainerID:"4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270" Netns:"/var/run/netns/b161b180-69d5-4fba-8ea8-9bd61c5e0454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-jz2lp;K8S_POD_INFRA_CONTAINER_ID=4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270;K8S_POD_UID=79bb87a4-8834-4c73-834e-356ccc1f7f9b" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-jz2lp] networking: Multus: [openshift-multus/network-metrics-daemon-jz2lp/79bb87a4-8834-4c73-834e-356ccc1f7f9b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-jz2lp in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-jz2lp in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-jz2lp?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: > Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: E0313 10:37:52.102308 7271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-jz2lp_openshift-multus_79bb87a4-8834-4c73-834e-356ccc1f7f9b_0(4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270): error 
adding pod openshift-multus_network-metrics-daemon-jz2lp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270" Netns:"/var/run/netns/b161b180-69d5-4fba-8ea8-9bd61c5e0454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-jz2lp;K8S_POD_INFRA_CONTAINER_ID=4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270;K8S_POD_UID=79bb87a4-8834-4c73-834e-356ccc1f7f9b" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-jz2lp] networking: Multus: [openshift-multus/network-metrics-daemon-jz2lp/79bb87a4-8834-4c73-834e-356ccc1f7f9b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-jz2lp in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-jz2lp in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-jz2lp?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: > pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: E0313 10:37:52.102334 7271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 10:37:52.103241 master-0 
kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-jz2lp_openshift-multus_79bb87a4-8834-4c73-834e-356ccc1f7f9b_0(4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270): error adding pod openshift-multus_network-metrics-daemon-jz2lp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270" Netns:"/var/run/netns/b161b180-69d5-4fba-8ea8-9bd61c5e0454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-jz2lp;K8S_POD_INFRA_CONTAINER_ID=4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270;K8S_POD_UID=79bb87a4-8834-4c73-834e-356ccc1f7f9b" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-jz2lp] networking: Multus: [openshift-multus/network-metrics-daemon-jz2lp/79bb87a4-8834-4c73-834e-356ccc1f7f9b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-jz2lp in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-jz2lp in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-jz2lp?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.103241 master-0 
kubenswrapper[7271]: > pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:37:52.103241 master-0 kubenswrapper[7271]: E0313 10:37:52.102394 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-jz2lp_openshift-multus(79bb87a4-8834-4c73-834e-356ccc1f7f9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-jz2lp_openshift-multus(79bb87a4-8834-4c73-834e-356ccc1f7f9b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-jz2lp_openshift-multus_79bb87a4-8834-4c73-834e-356ccc1f7f9b_0(4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270): error adding pod openshift-multus_network-metrics-daemon-jz2lp to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270\\\" Netns:\\\"/var/run/netns/b161b180-69d5-4fba-8ea8-9bd61c5e0454\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-jz2lp;K8S_POD_INFRA_CONTAINER_ID=4c2343ef56c5eef33e11d6a7f0a4541b3f9bb8c97d996a8594329dbe4e82f270;K8S_POD_UID=79bb87a4-8834-4c73-834e-356ccc1f7f9b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-jz2lp] networking: Multus: [openshift-multus/network-metrics-daemon-jz2lp/79bb87a4-8834-4c73-834e-356ccc1f7f9b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-jz2lp in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-jz2lp in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-jz2lp?timeout=1m0s\\\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/network-metrics-daemon-jz2lp" podUID="79bb87a4-8834-4c73-834e-356ccc1f7f9b" Mar 13 10:37:52.107086 master-0 kubenswrapper[7271]: E0313 10:37:52.107030 7271 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 10:37:52.107086 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-64bf9778cb-85x6d_openshift-marketplace_66f49a19-0e3b-4611-b8a6-5f5687fa20b6_0(3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b): error adding pod openshift-marketplace_marketplace-operator-64bf9778cb-85x6d to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b" Netns:"/var/run/netns/141496b5-32fb-4045-bc60-4ec3735d1301" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-64bf9778cb-85x6d;K8S_POD_INFRA_CONTAINER_ID=3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b;K8S_POD_UID=66f49a19-0e3b-4611-b8a6-5f5687fa20b6" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-64bf9778cb-85x6d] networking: Multus: [openshift-marketplace/marketplace-operator-64bf9778cb-85x6d/66f49a19-0e3b-4611-b8a6-5f5687fa20b6]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-64bf9778cb-85x6d in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-64bf9778cb-85x6d in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-64bf9778cb-85x6d?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.107086 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.107086 master-0 kubenswrapper[7271]: > Mar 13 10:37:52.107242 master-0 kubenswrapper[7271]: E0313 10:37:52.107110 7271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 10:37:52.107242 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-64bf9778cb-85x6d_openshift-marketplace_66f49a19-0e3b-4611-b8a6-5f5687fa20b6_0(3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b): error adding pod openshift-marketplace_marketplace-operator-64bf9778cb-85x6d to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b" Netns:"/var/run/netns/141496b5-32fb-4045-bc60-4ec3735d1301" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-64bf9778cb-85x6d;K8S_POD_INFRA_CONTAINER_ID=3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b;K8S_POD_UID=66f49a19-0e3b-4611-b8a6-5f5687fa20b6" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-64bf9778cb-85x6d] networking: Multus: [openshift-marketplace/marketplace-operator-64bf9778cb-85x6d/66f49a19-0e3b-4611-b8a6-5f5687fa20b6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-64bf9778cb-85x6d in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-64bf9778cb-85x6d in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-64bf9778cb-85x6d?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.107242 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.107242 master-0 kubenswrapper[7271]: > pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:37:52.107242 master-0 kubenswrapper[7271]: E0313 10:37:52.107134 7271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 10:37:52.107242 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-64bf9778cb-85x6d_openshift-marketplace_66f49a19-0e3b-4611-b8a6-5f5687fa20b6_0(3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b): error adding pod 
openshift-marketplace_marketplace-operator-64bf9778cb-85x6d to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b" Netns:"/var/run/netns/141496b5-32fb-4045-bc60-4ec3735d1301" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-64bf9778cb-85x6d;K8S_POD_INFRA_CONTAINER_ID=3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b;K8S_POD_UID=66f49a19-0e3b-4611-b8a6-5f5687fa20b6" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-64bf9778cb-85x6d] networking: Multus: [openshift-marketplace/marketplace-operator-64bf9778cb-85x6d/66f49a19-0e3b-4611-b8a6-5f5687fa20b6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-64bf9778cb-85x6d in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-64bf9778cb-85x6d in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-64bf9778cb-85x6d?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.107242 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.107242 master-0 kubenswrapper[7271]: > pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:37:52.107542 master-0 kubenswrapper[7271]: E0313 10:37:52.107208 7271 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-64bf9778cb-85x6d_openshift-marketplace(66f49a19-0e3b-4611-b8a6-5f5687fa20b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-64bf9778cb-85x6d_openshift-marketplace(66f49a19-0e3b-4611-b8a6-5f5687fa20b6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-64bf9778cb-85x6d_openshift-marketplace_66f49a19-0e3b-4611-b8a6-5f5687fa20b6_0(3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b): error adding pod openshift-marketplace_marketplace-operator-64bf9778cb-85x6d to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b\\\" Netns:\\\"/var/run/netns/141496b5-32fb-4045-bc60-4ec3735d1301\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-64bf9778cb-85x6d;K8S_POD_INFRA_CONTAINER_ID=3f5b5ad6f2da7679420274243bc7521134e88110eaf92cc8ef75034f18d8b29b;K8S_POD_UID=66f49a19-0e3b-4611-b8a6-5f5687fa20b6\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-64bf9778cb-85x6d] networking: Multus: [openshift-marketplace/marketplace-operator-64bf9778cb-85x6d/66f49a19-0e3b-4611-b8a6-5f5687fa20b6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-64bf9778cb-85x6d in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-64bf9778cb-85x6d in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-64bf9778cb-85x6d?timeout=1m0s\\\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" podUID="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" Mar 13 10:37:52.110776 master-0 kubenswrapper[7271]: E0313 10:37:52.110712 7271 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 10:37:52.110776 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-8d675b596-d787l_openshift-multus_95339220-324d-45e7-bdc2-e4f42fbd1d32_0(d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d): error adding pod openshift-multus_multus-admission-controller-8d675b596-d787l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d" Netns:"/var/run/netns/28700118-4832-4a80-a2a9-2ed92512c028" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-8d675b596-d787l;K8S_POD_INFRA_CONTAINER_ID=d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d;K8S_POD_UID=95339220-324d-45e7-bdc2-e4f42fbd1d32" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-8d675b596-d787l] networking: Multus: [openshift-multus/multus-admission-controller-8d675b596-d787l/95339220-324d-45e7-bdc2-e4f42fbd1d32]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-8d675b596-d787l in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-8d675b596-d787l in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-8d675b596-d787l?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.110776 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.110776 master-0 kubenswrapper[7271]: > Mar 13 10:37:52.110941 master-0 kubenswrapper[7271]: E0313 10:37:52.110793 7271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 10:37:52.110941 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-8d675b596-d787l_openshift-multus_95339220-324d-45e7-bdc2-e4f42fbd1d32_0(d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d): error adding pod openshift-multus_multus-admission-controller-8d675b596-d787l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d" Netns:"/var/run/netns/28700118-4832-4a80-a2a9-2ed92512c028" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-8d675b596-d787l;K8S_POD_INFRA_CONTAINER_ID=d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d;K8S_POD_UID=95339220-324d-45e7-bdc2-e4f42fbd1d32" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-8d675b596-d787l] networking: Multus: [openshift-multus/multus-admission-controller-8d675b596-d787l/95339220-324d-45e7-bdc2-e4f42fbd1d32]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-8d675b596-d787l in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-8d675b596-d787l in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-8d675b596-d787l?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.110941 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.110941 master-0 kubenswrapper[7271]: > pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:37:52.110941 master-0 kubenswrapper[7271]: E0313 10:37:52.110835 7271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 10:37:52.110941 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-8d675b596-d787l_openshift-multus_95339220-324d-45e7-bdc2-e4f42fbd1d32_0(d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d): 
error adding pod openshift-multus_multus-admission-controller-8d675b596-d787l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d" Netns:"/var/run/netns/28700118-4832-4a80-a2a9-2ed92512c028" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-8d675b596-d787l;K8S_POD_INFRA_CONTAINER_ID=d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d;K8S_POD_UID=95339220-324d-45e7-bdc2-e4f42fbd1d32" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-8d675b596-d787l] networking: Multus: [openshift-multus/multus-admission-controller-8d675b596-d787l/95339220-324d-45e7-bdc2-e4f42fbd1d32]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-8d675b596-d787l in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-8d675b596-d787l in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-8d675b596-d787l?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.110941 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.110941 master-0 kubenswrapper[7271]: > pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:37:52.110941 master-0 kubenswrapper[7271]: 
E0313 10:37:52.110887 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"multus-admission-controller-8d675b596-d787l_openshift-multus(95339220-324d-45e7-bdc2-e4f42fbd1d32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"multus-admission-controller-8d675b596-d787l_openshift-multus(95339220-324d-45e7-bdc2-e4f42fbd1d32)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-8d675b596-d787l_openshift-multus_95339220-324d-45e7-bdc2-e4f42fbd1d32_0(d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d): error adding pod openshift-multus_multus-admission-controller-8d675b596-d787l to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d\\\" Netns:\\\"/var/run/netns/28700118-4832-4a80-a2a9-2ed92512c028\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-8d675b596-d787l;K8S_POD_INFRA_CONTAINER_ID=d128f9e2665f14c03b3c551434e0d1f6efb43261b6d802db9441b2d15ba6a17d;K8S_POD_UID=95339220-324d-45e7-bdc2-e4f42fbd1d32\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-8d675b596-d787l] networking: Multus: [openshift-multus/multus-admission-controller-8d675b596-d787l/95339220-324d-45e7-bdc2-e4f42fbd1d32]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-8d675b596-d787l in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-8d675b596-d787l in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-8d675b596-d787l?timeout=1m0s\\\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" Mar 13 10:37:52.113844 master-0 kubenswrapper[7271]: E0313 10:37:52.113803 7271 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 10:37:52.113844 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-d64cfc9db-rsl2h_openshift-operator-lifecycle-manager_2afe3890-e844-4dd3-ba49-3ac9178549bf_0(852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065): error adding pod openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rsl2h to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065" Netns:"/var/run/netns/ba6aaa92-aab5-413c-bb4e-80b81c39ffe5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-d64cfc9db-rsl2h;K8S_POD_INFRA_CONTAINER_ID=852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065;K8S_POD_UID=2afe3890-e844-4dd3-ba49-3ac9178549bf" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h] networking: Multus: 
[openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h/2afe3890-e844-4dd3-ba49-3ac9178549bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-d64cfc9db-rsl2h in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-d64cfc9db-rsl2h in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-d64cfc9db-rsl2h?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.113844 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.113844 master-0 kubenswrapper[7271]: > Mar 13 10:37:52.113844 master-0 kubenswrapper[7271]: E0313 10:37:52.113839 7271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 10:37:52.113844 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-d64cfc9db-rsl2h_openshift-operator-lifecycle-manager_2afe3890-e844-4dd3-ba49-3ac9178549bf_0(852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065): error adding pod openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rsl2h to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065" Netns:"/var/run/netns/ba6aaa92-aab5-413c-bb4e-80b81c39ffe5" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-d64cfc9db-rsl2h;K8S_POD_INFRA_CONTAINER_ID=852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065;K8S_POD_UID=2afe3890-e844-4dd3-ba49-3ac9178549bf" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h/2afe3890-e844-4dd3-ba49-3ac9178549bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-d64cfc9db-rsl2h in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-d64cfc9db-rsl2h in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-d64cfc9db-rsl2h?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.113844 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.113844 master-0 kubenswrapper[7271]: > pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:37:52.114121 master-0 kubenswrapper[7271]: E0313 10:37:52.113857 7271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 10:37:52.114121 master-0 kubenswrapper[7271]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-d64cfc9db-rsl2h_openshift-operator-lifecycle-manager_2afe3890-e844-4dd3-ba49-3ac9178549bf_0(852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065): 
error adding pod openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rsl2h to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065" Netns:"/var/run/netns/ba6aaa92-aab5-413c-bb4e-80b81c39ffe5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-d64cfc9db-rsl2h;K8S_POD_INFRA_CONTAINER_ID=852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065;K8S_POD_UID=2afe3890-e844-4dd3-ba49-3ac9178549bf" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h/2afe3890-e844-4dd3-ba49-3ac9178549bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-d64cfc9db-rsl2h in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-d64cfc9db-rsl2h in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-d64cfc9db-rsl2h?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 10:37:52.114121 master-0 kubenswrapper[7271]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 10:37:52.114121 master-0 kubenswrapper[7271]: > pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:37:52.114121 master-0 kubenswrapper[7271]: 
E0313 10:37:52.113911 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"olm-operator-d64cfc9db-rsl2h_openshift-operator-lifecycle-manager(2afe3890-e844-4dd3-ba49-3ac9178549bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"olm-operator-d64cfc9db-rsl2h_openshift-operator-lifecycle-manager(2afe3890-e844-4dd3-ba49-3ac9178549bf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-d64cfc9db-rsl2h_openshift-operator-lifecycle-manager_2afe3890-e844-4dd3-ba49-3ac9178549bf_0(852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065): error adding pod openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rsl2h to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065\\\" Netns:\\\"/var/run/netns/ba6aaa92-aab5-413c-bb4e-80b81c39ffe5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-d64cfc9db-rsl2h;K8S_POD_INFRA_CONTAINER_ID=852ad08ff9ec61972354a90873f764dae94c57b9bc9fa8c6cf6043cc6a157065;K8S_POD_UID=2afe3890-e844-4dd3-ba49-3ac9178549bf\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h/2afe3890-e844-4dd3-ba49-3ac9178549bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-d64cfc9db-rsl2h in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-d64cfc9db-rsl2h in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-d64cfc9db-rsl2h?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" podUID="2afe3890-e844-4dd3-ba49-3ac9178549bf"
Mar 13 10:37:52.206729 master-0 kubenswrapper[7271]: I0313 10:37:52.206560 7271 generic.go:334] "Generic (PLEG): container finished" podID="b8d40b37-0f3d-4531-9fa8-eda965d2337d" containerID="a242486632cda89db044ed9feff7bb156e404c15924daa0514297e6cfa246629" exitCode=0
Mar 13 10:37:52.442398 master-0 kubenswrapper[7271]: I0313 10:37:52.442267 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 13 10:37:52.442718 master-0 kubenswrapper[7271]: I0313 10:37:52.442400 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: I0313 10:37:52.504237 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]log ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: livez check failed
Mar 13 10:37:52.504434 master-0 kubenswrapper[7271]: I0313 10:37:52.504332 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:37:53.216755 master-0 kubenswrapper[7271]: I0313 10:37:53.216493 7271 generic.go:334] "Generic (PLEG): container finished" podID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerID="f5cc508c8bba11aea5ee45f0185ba6b283bf13e245305fcd3727611ac4aa5998" exitCode=0
Mar 13 10:37:53.654229 master-0 kubenswrapper[7271]: E0313 10:37:53.654133 7271 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 10:37:53.654534 master-0 kubenswrapper[7271]: E0313 10:37:53.654387 7271 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.009s"
Mar 13 10:37:53.654534 master-0 kubenswrapper[7271]: I0313 10:37:53.654420 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:37:53.654534 master-0 kubenswrapper[7271]: I0313 10:37:53.654495 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:37:53.654534 master-0 kubenswrapper[7271]: I0313 10:37:53.654506 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:37:53.654534 master-0 kubenswrapper[7271]: I0313 10:37:53.654516 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:37:53.656215 master-0 kubenswrapper[7271]: I0313 10:37:53.655924 7271 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"281e47a8ccfe9b7bd7d1fae86c8e235e63f17e9935336f3e6ad3bed18be23300"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 13 10:37:53.656215 master-0 kubenswrapper[7271]: I0313 10:37:53.656014 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://281e47a8ccfe9b7bd7d1fae86c8e235e63f17e9935336f3e6ad3bed18be23300" gracePeriod=30
Mar 13 10:37:53.657943 master-0 kubenswrapper[7271]: I0313 10:37:53.657856 7271 scope.go:117] "RemoveContainer" containerID="07efb32e685572e6b4d6844e3569402a8bdfbf11ae0829c85acd5de7788ca4d9"
Mar 13 10:37:53.659093 master-0 kubenswrapper[7271]: I0313 10:37:53.659014 7271 scope.go:117] "RemoveContainer" containerID="f5cc508c8bba11aea5ee45f0185ba6b283bf13e245305fcd3727611ac4aa5998"
Mar 13 10:37:53.663401 master-0 kubenswrapper[7271]: I0313 10:37:53.663268 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 13 10:37:54.225965 master-0 kubenswrapper[7271]: I0313 10:37:54.225774 7271 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="281e47a8ccfe9b7bd7d1fae86c8e235e63f17e9935336f3e6ad3bed18be23300" exitCode=2
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: E0313 10:37:56.109119 7271 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=<
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: &Event{ObjectMeta:{apiserver-65bc99cdf7-7rjbr.189c6048a8f5dcbc openshift-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-apiserver,Name:apiserver-65bc99cdf7-7rjbr,UID:1d72d950-cfb4-4ed5-9ad6-f7266b937493,APIVersion:v1,ResourceVersion:6311,FieldPath:spec.containers{openshift-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: body: [+]ping ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]log ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: livez check failed
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: 
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:36:58.465729724 +0000 UTC m=+72.992552114,LastTimestamp:2026-03-13 10:36:58.465729724 +0000 UTC m=+72.992552114,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Mar 13 10:37:56.109374 master-0 kubenswrapper[7271]: >
Mar 13 10:37:56.437018 master-0 kubenswrapper[7271]: I0313 10:37:56.436859 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:37:56.437018 master-0 kubenswrapper[7271]: I0313 10:37:56.436936 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:37:58.091741 master-0 kubenswrapper[7271]: E0313 10:37:58.091646 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: I0313 10:38:01.509664 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]log ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:38:01.509745 master-0 kubenswrapper[7271]: livez check failed
Mar 13 10:38:01.510713 master-0 kubenswrapper[7271]: I0313 10:38:01.509756 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:38:06.436235 master-0 kubenswrapper[7271]: I0313 10:38:06.436154 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:38:06.436752 master-0 kubenswrapper[7271]: I0313 10:38:06.436226 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:38:08.293798 master-0 kubenswrapper[7271]: E0313 10:38:08.293723 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 13 10:38:08.337342 master-0 kubenswrapper[7271]: E0313 10:38:08.337195 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:37:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:37:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:37:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:37:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d
92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\
"sizeBytes\\\":396521759}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:38:09.035966 master-0 kubenswrapper[7271]: I0313 10:38:09.035886 7271 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-df8wr container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body=
Mar 13 10:38:09.036250 master-0 kubenswrapper[7271]: I0313 10:38:09.035998 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" podUID="574bf255-14b3-40af-b240-2d3abd5b86b8" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused"
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: I0313 10:38:10.515735 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]log ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: livez check failed
Mar 13 10:38:10.515890 master-0 kubenswrapper[7271]: I0313 10:38:10.515827 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:38:16.437275 master-0 kubenswrapper[7271]: I0313 10:38:16.437131 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:38:16.437275 master-0 kubenswrapper[7271]: I0313 10:38:16.437218 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:38:18.338956 master-0 kubenswrapper[7271]: E0313 10:38:18.338849 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:38:18.695031 master-0 kubenswrapper[7271]: E0313 10:38:18.694841 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: I0313 10:38:19.521796 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]log ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:38:19.521862 master-0 kubenswrapper[7271]: livez check failed
Mar 13 10:38:19.523165 master-0 kubenswrapper[7271]: I0313 10:38:19.522773 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:38:26.437493 master-0 kubenswrapper[7271]: I0313 10:38:26.437402 7271 patch_prober.go:28] interesting pod/controller-manager-6954c8766d-g8z48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body=
Mar 13 10:38:26.438413 master-0 kubenswrapper[7271]: I0313 10:38:26.437516 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.43:8443/healthz\": dial tcp 10.128.0.43:8443: connect: connection refused"
Mar 13 10:38:27.666157 master-0 kubenswrapper[7271]: E0313 10:38:27.666056 7271 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 10:38:27.667235 master-0 kubenswrapper[7271]: E0313 10:38:27.666275 7271 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.012s"
Mar 13 10:38:27.667235 master-0 kubenswrapper[7271]: I0313 10:38:27.666300 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48"
Mar 13 10:38:27.667235 master-0 kubenswrapper[7271]: I0313 10:38:27.666988 7271 scope.go:117] "RemoveContainer" containerID="e071f5df1cf13730e7c3a2d7e673c1b7527862b8e1f69ed525efba676776f319"
Mar 13 10:38:27.673331 master-0 kubenswrapper[7271]: I0313 10:38:27.673284 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 13 10:38:28.339400 master-0 kubenswrapper[7271]: E0313 10:38:28.339307 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:38:28.405067 master-0 kubenswrapper[7271]: I0313 10:38:28.404998 7271 generic.go:334] "Generic (PLEG): container finished" podID="ec3168fc-6c8f-4603-94e0-17b1ae22a802" containerID="1920e0c05ffebe7a0fab80b000aebd0c99a9626ca78c9c2b099c218c0c998378" exitCode=0
Mar 13 10:38:28.407425 master-0 kubenswrapper[7271]: I0313 10:38:28.407378 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6954c8766d-g8z48_6317b62a-46e2-4a45-9c29-cb04c40d4425/controller-manager/0.log"
Mar 13 10:38:28.408818 master-0 kubenswrapper[7271]: I0313 10:38:28.408777 7271 generic.go:334] "Generic (PLEG): container finished" podID="37b2e803-302b-4650-b18f-d3d2dd703bd5" containerID="0726d914d99337ac6ae1fc3306b6380d27700c4e1ef052dd78af4add66671237" exitCode=0
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: I0313 10:38:28.528396 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]log ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: livez check failed
Mar 13 10:38:28.528493 master-0 kubenswrapper[7271]: I0313 10:38:28.528494 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" 
podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:38:29.414957 master-0 kubenswrapper[7271]: I0313 10:38:29.414844 7271 generic.go:334] "Generic (PLEG): container finished" podID="5ed5e77b-948b-4d94-ac9f-440ee3c07e18" containerID="dacb5471d19718622299f0fa6f9e909a820c9329353d0e6ad130c4eb61cefa28" exitCode=0 Mar 13 10:38:29.495916 master-0 kubenswrapper[7271]: E0313 10:38:29.495806 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 13 10:38:30.112730 master-0 kubenswrapper[7271]: E0313 10:38:30.112534 7271 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{apiserver-65bc99cdf7-7rjbr.189c60449e52ac0a openshift-apiserver 6959 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-apiserver,Name:apiserver-65bc99cdf7-7rjbr,UID:1d72d950-cfb4-4ed5-9ad6-f7266b937493,APIVersion:v1,ResourceVersion:6311,FieldPath:spec.containers{openshift-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:36:41 +0000 UTC,LastTimestamp:2026-03-13 10:36:58.465804896 +0000 UTC m=+72.992627286,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:38:31.426542 master-0 kubenswrapper[7271]: I0313 10:38:31.426432 7271 generic.go:334] "Generic (PLEG): container finished" podID="574bf255-14b3-40af-b240-2d3abd5b86b8" 
containerID="a384e9c9352558c7493eb0f31fbfe7c7667c323e9cd28c07e6b3e552b94e372f" exitCode=0 Mar 13 10:38:33.442195 master-0 kubenswrapper[7271]: I0313 10:38:33.441998 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nsg74_282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/openshift-controller-manager-operator/1.log" Mar 13 10:38:33.444291 master-0 kubenswrapper[7271]: I0313 10:38:33.444234 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nsg74_282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/openshift-controller-manager-operator/0.log" Mar 13 10:38:33.444635 master-0 kubenswrapper[7271]: I0313 10:38:33.444533 7271 generic.go:334] "Generic (PLEG): container finished" podID="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" containerID="53a8fd339624b3a824ba77b1d93455581709099722d103fd93b0ffb255eebf03" exitCode=255 Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: I0313 10:38:37.536060 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [-]etcd failed: reason withheld Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:38:37.536272 
master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:38:37.536272 master-0 kubenswrapper[7271]: I0313 10:38:37.536219 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:38:38.339643 master-0 kubenswrapper[7271]: E0313 10:38:38.339559 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:38:41.097205 master-0 kubenswrapper[7271]: E0313 10:38:41.097138 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: I0313 10:38:41.215153 7271 
patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]etcd ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:38:41.215461 master-0 kubenswrapper[7271]: I0313 10:38:41.215254 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" 
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: I0313 10:38:41.219241 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]etcd ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:38:41.219270 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:38:41.219941 master-0 
kubenswrapper[7271]: I0313 10:38:41.219278 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:38:42.979475 master-0 kubenswrapper[7271]: E0313 10:38:42.979404 7271 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.313s" Mar 13 10:38:42.979475 master-0 kubenswrapper[7271]: I0313 10:38:42.979458 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:38:42.980099 master-0 kubenswrapper[7271]: I0313 10:38:42.979670 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:38:42.980099 master-0 kubenswrapper[7271]: I0313 10:38:42.979920 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:38:42.980099 master-0 kubenswrapper[7271]: I0313 10:38:42.979928 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:38:42.980099 master-0 kubenswrapper[7271]: I0313 10:38:42.979955 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:38:42.980099 master-0 kubenswrapper[7271]: I0313 10:38:42.979972 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:38:42.980304 master-0 kubenswrapper[7271]: I0313 10:38:42.980181 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:38:42.980559 master-0 kubenswrapper[7271]: I0313 10:38:42.980485 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:38:42.980559 master-0 kubenswrapper[7271]: I0313 10:38:42.980492 7271 scope.go:117] "RemoveContainer" containerID="2c461d42e265a3320bcaee208db9040eedffe39900d9e8aa36490e00a5c604c0" Mar 13 10:38:42.980559 master-0 kubenswrapper[7271]: I0313 10:38:42.980514 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:38:42.982264 master-0 kubenswrapper[7271]: I0313 10:38:42.981297 7271 scope.go:117] "RemoveContainer" containerID="53a8fd339624b3a824ba77b1d93455581709099722d103fd93b0ffb255eebf03" Mar 13 10:38:42.982264 master-0 kubenswrapper[7271]: I0313 10:38:42.981426 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:38:42.982264 master-0 kubenswrapper[7271]: I0313 10:38:42.981458 7271 scope.go:117] "RemoveContainer" containerID="dbff0a4ca77dfd3c5dce218a106dba837080cd80ee7f274b5ebceb8f682ccabd" Mar 13 10:38:42.982264 master-0 kubenswrapper[7271]: E0313 10:38:42.981498 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8565d84698-nsg74_openshift-controller-manager-operator(282bc9ff-1bc0-421b-9cd3-d88d7c5e5303)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" podUID="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" Mar 13 10:38:42.982264 master-0 kubenswrapper[7271]: I0313 10:38:42.981632 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:38:42.982264 master-0 kubenswrapper[7271]: I0313 10:38:42.981936 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:38:42.982264 master-0 kubenswrapper[7271]: I0313 10:38:42.981970 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:38:42.982264 master-0 kubenswrapper[7271]: I0313 10:38:42.982105 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:38:42.982799 master-0 kubenswrapper[7271]: I0313 10:38:42.982534 7271 scope.go:117] "RemoveContainer" containerID="30ed7322c0091d1c760c898b8eeff7c2a46e380aac09f0741b2738a7131c9763" Mar 13 10:38:42.983315 master-0 kubenswrapper[7271]: I0313 10:38:42.983072 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:38:42.983315 master-0 kubenswrapper[7271]: I0313 10:38:42.983297 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:38:42.986607 master-0 kubenswrapper[7271]: I0313 10:38:42.983689 7271 scope.go:117] "RemoveContainer" containerID="5e2eaafddd132326dc9e3d7a39739553509b59eb3a4133fcdb22787eb5fde49c" Mar 13 10:38:42.986607 master-0 kubenswrapper[7271]: I0313 10:38:42.984804 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:38:42.986607 master-0 kubenswrapper[7271]: I0313 10:38:42.984910 7271 scope.go:117] "RemoveContainer" containerID="d2e7a9c17281b6d5f7f20fbe7b128af98dc009aec3115a4cb2ebd1a39090d634" Mar 13 10:38:42.986607 master-0 kubenswrapper[7271]: I0313 10:38:42.985030 7271 scope.go:117] "RemoveContainer" containerID="a242486632cda89db044ed9feff7bb156e404c15924daa0514297e6cfa246629" Mar 13 10:38:42.992880 master-0 kubenswrapper[7271]: I0313 10:38:42.992821 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 10:38:43.003020 master-0 kubenswrapper[7271]: I0313 10:38:43.002888 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" event={"ID":"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9","Type":"ContainerDied","Data":"5e2eaafddd132326dc9e3d7a39739553509b59eb3a4133fcdb22787eb5fde49c"} Mar 13 10:38:43.003020 master-0 kubenswrapper[7271]: I0313 10:38:43.002941 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" event={"ID":"6317b62a-46e2-4a45-9c29-cb04c40d4425","Type":"ContainerDied","Data":"e071f5df1cf13730e7c3a2d7e673c1b7527862b8e1f69ed525efba676776f319"} Mar 13 10:38:43.003020 master-0 kubenswrapper[7271]: I0313 10:38:43.002962 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" event={"ID":"1434c4a2-5c4d-478a-a16a-7d6a52ea3099","Type":"ContainerDied","Data":"07efb32e685572e6b4d6844e3569402a8bdfbf11ae0829c85acd5de7788ca4d9"} Mar 13 10:38:43.003259 master-0 kubenswrapper[7271]: I0313 10:38:43.003225 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003319 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"9e06733a-9c47-4bcf-a5e2-946db8e2714b","Type":"ContainerDied","Data":"c87d032f992ab15941d07ccbd459ecd39c5fd54e6df8b197a56c0bc747f7d534"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003338 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"7baf3efc-04dc-4c17-9c2a-397ac022d281","Type":"ContainerDied","Data":"56c9b868392613f72b3a821d9f4fd3508fb4759378ef047d1a2286ea13733ed0"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003351 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-9z8mk" event={"ID":"f87662b9-6ac6-44f3-8a16-ff858c2baa91","Type":"ContainerDied","Data":"d2e7a9c17281b6d5f7f20fbe7b128af98dc009aec3115a4cb2ebd1a39090d634"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003363 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003372 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003380 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003389 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003398 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003410 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" event={"ID":"8f9db15a-8854-485b-9863-9cbe5dddd977","Type":"ContainerDied","Data":"30ed7322c0091d1c760c898b8eeff7c2a46e380aac09f0741b2738a7131c9763"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003422 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" event={"ID":"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba","Type":"ContainerDied","Data":"2c461d42e265a3320bcaee208db9040eedffe39900d9e8aa36490e00a5c604c0"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003434 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" event={"ID":"a1a998af-4fc0-4078-a6a0-93dde6c00508","Type":"ContainerDied","Data":"dbff0a4ca77dfd3c5dce218a106dba837080cd80ee7f274b5ebceb8f682ccabd"} Mar 13 10:38:43.003482 master-0 kubenswrapper[7271]: I0313 10:38:43.003446 7271 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" event={"ID":"b8d40b37-0f3d-4531-9fa8-eda965d2337d","Type":"ContainerDied","Data":"a242486632cda89db044ed9feff7bb156e404c15924daa0514297e6cfa246629"} Mar 13 10:38:43.004103 master-0 kubenswrapper[7271]: I0313 10:38:43.003457 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" event={"ID":"6ed47c57-533f-43e4-88eb-07da29b4878f","Type":"ContainerDied","Data":"f5cc508c8bba11aea5ee45f0185ba6b283bf13e245305fcd3727611ac4aa5998"} Mar 13 10:38:43.004103 master-0 kubenswrapper[7271]: I0313 10:38:43.003998 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"281e47a8ccfe9b7bd7d1fae86c8e235e63f17e9935336f3e6ad3bed18be23300"} Mar 13 10:38:43.004103 master-0 kubenswrapper[7271]: I0313 10:38:43.004018 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479"} Mar 13 10:38:43.004103 master-0 kubenswrapper[7271]: I0313 10:38:43.004031 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" event={"ID":"1434c4a2-5c4d-478a-a16a-7d6a52ea3099","Type":"ContainerStarted","Data":"cd940301b6045fcf3388088b051ec834a3261f017e1dcca1b8063296e4c0a2f1"} Mar 13 10:38:43.004103 master-0 kubenswrapper[7271]: I0313 10:38:43.004045 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" event={"ID":"6ed47c57-533f-43e4-88eb-07da29b4878f","Type":"ContainerStarted","Data":"080bec4d72d5bc2a5ff39e071b40e2b30bc6c479f34acbf3881af3489f75aaae"} Mar 
13 10:38:43.004103 master-0 kubenswrapper[7271]: I0313 10:38:43.004058 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" event={"ID":"ec3168fc-6c8f-4603-94e0-17b1ae22a802","Type":"ContainerDied","Data":"1920e0c05ffebe7a0fab80b000aebd0c99a9626ca78c9c2b099c218c0c998378"} Mar 13 10:38:43.004710 master-0 kubenswrapper[7271]: I0313 10:38:43.004662 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" event={"ID":"6317b62a-46e2-4a45-9c29-cb04c40d4425","Type":"ContainerStarted","Data":"531f8d3aa930e35bb9ee67f1aa93559ea0aeef92bc7b549aec79dcf9206d8e53"} Mar 13 10:38:43.004710 master-0 kubenswrapper[7271]: I0313 10:38:43.004683 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" event={"ID":"37b2e803-302b-4650-b18f-d3d2dd703bd5","Type":"ContainerDied","Data":"0726d914d99337ac6ae1fc3306b6380d27700c4e1ef052dd78af4add66671237"} Mar 13 10:38:43.005952 master-0 kubenswrapper[7271]: I0313 10:38:43.005353 7271 scope.go:117] "RemoveContainer" containerID="b25246b87fe6711f1f7c66db1d40e94041f17222319c643c72a0f13f39f94ce3" Mar 13 10:38:43.005952 master-0 kubenswrapper[7271]: I0313 10:38:43.005656 7271 scope.go:117] "RemoveContainer" containerID="1920e0c05ffebe7a0fab80b000aebd0c99a9626ca78c9c2b099c218c0c998378" Mar 13 10:38:43.006855 master-0 kubenswrapper[7271]: I0313 10:38:43.006127 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" event={"ID":"5ed5e77b-948b-4d94-ac9f-440ee3c07e18","Type":"ContainerDied","Data":"dacb5471d19718622299f0fa6f9e909a820c9329353d0e6ad130c4eb61cefa28"} Mar 13 10:38:43.006855 master-0 kubenswrapper[7271]: I0313 10:38:43.006218 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" event={"ID":"574bf255-14b3-40af-b240-2d3abd5b86b8","Type":"ContainerDied","Data":"a384e9c9352558c7493eb0f31fbfe7c7667c323e9cd28c07e6b3e552b94e372f"} Mar 13 10:38:43.006855 master-0 kubenswrapper[7271]: I0313 10:38:43.006244 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" event={"ID":"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303","Type":"ContainerDied","Data":"53a8fd339624b3a824ba77b1d93455581709099722d103fd93b0ffb255eebf03"} Mar 13 10:38:43.007815 master-0 kubenswrapper[7271]: I0313 10:38:43.007018 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:38:43.008097 master-0 kubenswrapper[7271]: I0313 10:38:43.008057 7271 scope.go:117] "RemoveContainer" containerID="a384e9c9352558c7493eb0f31fbfe7c7667c323e9cd28c07e6b3e552b94e372f" Mar 13 10:38:43.008165 master-0 kubenswrapper[7271]: I0313 10:38:43.008103 7271 scope.go:117] "RemoveContainer" containerID="0726d914d99337ac6ae1fc3306b6380d27700c4e1ef052dd78af4add66671237" Mar 13 10:38:43.008235 master-0 kubenswrapper[7271]: I0313 10:38:43.008204 7271 scope.go:117] "RemoveContainer" containerID="dacb5471d19718622299f0fa6f9e909a820c9329353d0e6ad130c4eb61cefa28" Mar 13 10:38:43.008395 master-0 kubenswrapper[7271]: I0313 10:38:43.008310 7271 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-mvfgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 13 10:38:43.008395 master-0 kubenswrapper[7271]: I0313 10:38:43.008344 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" 
podUID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 13 10:38:43.084954 master-0 kubenswrapper[7271]: I0313 10:38:43.080363 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 10:38:43.084954 master-0 kubenswrapper[7271]: I0313 10:38:43.080424 7271 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="08b74081-dd4e-4a48-bc46-db9f2ba53f35" Mar 13 10:38:43.156866 master-0 kubenswrapper[7271]: I0313 10:38:43.154541 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 10:38:43.156866 master-0 kubenswrapper[7271]: I0313 10:38:43.154603 7271 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="08b74081-dd4e-4a48-bc46-db9f2ba53f35" Mar 13 10:38:43.169545 master-0 kubenswrapper[7271]: I0313 10:38:43.169056 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"] Mar 13 10:38:43.169545 master-0 kubenswrapper[7271]: I0313 10:38:43.169129 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-s6k7z"] Mar 13 10:38:43.179114 master-0 kubenswrapper[7271]: I0313 10:38:43.179017 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=118.178987458 podStartE2EDuration="1m58.178987458s" podCreationTimestamp="2026-03-13 10:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:38:43.172337026 +0000 UTC m=+177.699159416" 
watchObservedRunningTime="2026-03-13 10:38:43.178987458 +0000 UTC m=+177.705809848" Mar 13 10:38:43.193326 master-0 kubenswrapper[7271]: I0313 10:38:43.193288 7271 scope.go:117] "RemoveContainer" containerID="5c959a07b9cea59f8d22bac12b5ad0b337201cde45ef40482caaae6f05ee2a56" Mar 13 10:38:43.379500 master-0 kubenswrapper[7271]: I0313 10:38:43.377082 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"] Mar 13 10:38:43.419868 master-0 kubenswrapper[7271]: W0313 10:38:43.416904 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7667717b_fb74_456b_8615_16475cb69e98.slice/crio-571b031f77274fed6328f6e07d585dcfd8fd69050ed40ecc9a578fd8f3044381 WatchSource:0}: Error finding container 571b031f77274fed6328f6e07d585dcfd8fd69050ed40ecc9a578fd8f3044381: Status 404 returned error can't find the container with id 571b031f77274fed6328f6e07d585dcfd8fd69050ed40ecc9a578fd8f3044381 Mar 13 10:38:43.471528 master-0 kubenswrapper[7271]: I0313 10:38:43.469859 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:38:43.524403 master-0 kubenswrapper[7271]: I0313 10:38:43.523836 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"] Mar 13 10:38:43.533960 master-0 kubenswrapper[7271]: I0313 10:38:43.533929 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-9z8mk_f87662b9-6ac6-44f3-8a16-ff858c2baa91/approver/0.log" Mar 13 10:38:43.581011 master-0 kubenswrapper[7271]: I0313 10:38:43.572293 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" 
event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerStarted","Data":"571b031f77274fed6328f6e07d585dcfd8fd69050ed40ecc9a578fd8f3044381"} Mar 13 10:38:43.596912 master-0 kubenswrapper[7271]: I0313 10:38:43.590888 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nsg74_282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/openshift-controller-manager-operator/1.log" Mar 13 10:38:43.596912 master-0 kubenswrapper[7271]: I0313 10:38:43.591429 7271 scope.go:117] "RemoveContainer" containerID="53a8fd339624b3a824ba77b1d93455581709099722d103fd93b0ffb255eebf03" Mar 13 10:38:43.610948 master-0 kubenswrapper[7271]: I0313 10:38:43.606806 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" event={"ID":"a1a998af-4fc0-4078-a6a0-93dde6c00508","Type":"ContainerStarted","Data":"b2d3650b18e8d4e9f38822804153cd7a45f1b0959bcb61f0ce6a90a1570211e0"} Mar 13 10:38:43.619085 master-0 kubenswrapper[7271]: I0313 10:38:43.619002 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:38:43.627018 master-0 kubenswrapper[7271]: I0313 10:38:43.626984 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" Mar 13 10:38:43.650064 master-0 kubenswrapper[7271]: I0313 10:38:43.649985 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"] Mar 13 10:38:43.700245 master-0 kubenswrapper[7271]: I0313 10:38:43.700128 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aaf36b4-e556-4723-a624-aa2edc69c301" path="/var/lib/kubelet/pods/4aaf36b4-e556-4723-a624-aa2edc69c301/volumes" Mar 13 10:38:43.700886 master-0 
kubenswrapper[7271]: I0313 10:38:43.700855 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"] Mar 13 10:38:43.824918 master-0 kubenswrapper[7271]: I0313 10:38:43.822920 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:38:43.835618 master-0 kubenswrapper[7271]: I0313 10:38:43.833851 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-d787l"] Mar 13 10:38:43.856575 master-0 kubenswrapper[7271]: I0313 10:38:43.856526 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"] Mar 13 10:38:43.871291 master-0 kubenswrapper[7271]: W0313 10:38:43.868930 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66f49a19_0e3b_4611_b8a6_5f5687fa20b6.slice/crio-34c705593dd577219134e52fa5f1f4ac1bf3a254e75ac17359d23f2432c84086 WatchSource:0}: Error finding container 34c705593dd577219134e52fa5f1f4ac1bf3a254e75ac17359d23f2432c84086: Status 404 returned error can't find the container with id 34c705593dd577219134e52fa5f1f4ac1bf3a254e75ac17359d23f2432c84086 Mar 13 10:38:43.879112 master-0 kubenswrapper[7271]: I0313 10:38:43.875928 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"] Mar 13 10:38:43.929604 master-0 kubenswrapper[7271]: I0313 10:38:43.929468 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jz2lp"] Mar 13 10:38:44.113761 master-0 kubenswrapper[7271]: I0313 10:38:44.113713 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_9e06733a-9c47-4bcf-a5e2-946db8e2714b/installer/0.log" Mar 13 10:38:44.114357 master-0 
kubenswrapper[7271]: I0313 10:38:44.113788 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:38:44.139521 master-0 kubenswrapper[7271]: I0313 10:38:44.139471 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_7baf3efc-04dc-4c17-9c2a-397ac022d281/installer/0.log" Mar 13 10:38:44.139631 master-0 kubenswrapper[7271]: I0313 10:38:44.139617 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 10:38:44.161155 master-0 kubenswrapper[7271]: I0313 10:38:44.161117 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7baf3efc-04dc-4c17-9c2a-397ac022d281-kube-api-access\") pod \"7baf3efc-04dc-4c17-9c2a-397ac022d281\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " Mar 13 10:38:44.161311 master-0 kubenswrapper[7271]: I0313 10:38:44.161294 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-kubelet-dir\") pod \"7baf3efc-04dc-4c17-9c2a-397ac022d281\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " Mar 13 10:38:44.161453 master-0 kubenswrapper[7271]: I0313 10:38:44.161418 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kubelet-dir\") pod \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " Mar 13 10:38:44.161570 master-0 kubenswrapper[7271]: I0313 10:38:44.161554 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-var-lock\") pod 
\"7baf3efc-04dc-4c17-9c2a-397ac022d281\" (UID: \"7baf3efc-04dc-4c17-9c2a-397ac022d281\") " Mar 13 10:38:44.161700 master-0 kubenswrapper[7271]: I0313 10:38:44.161678 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kube-api-access\") pod \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " Mar 13 10:38:44.161811 master-0 kubenswrapper[7271]: I0313 10:38:44.161795 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-var-lock\") pod \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\" (UID: \"9e06733a-9c47-4bcf-a5e2-946db8e2714b\") " Mar 13 10:38:44.162903 master-0 kubenswrapper[7271]: I0313 10:38:44.162070 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-var-lock" (OuterVolumeSpecName: "var-lock") pod "9e06733a-9c47-4bcf-a5e2-946db8e2714b" (UID: "9e06733a-9c47-4bcf-a5e2-946db8e2714b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:38:44.163093 master-0 kubenswrapper[7271]: I0313 10:38:44.162280 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9e06733a-9c47-4bcf-a5e2-946db8e2714b" (UID: "9e06733a-9c47-4bcf-a5e2-946db8e2714b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:38:44.163174 master-0 kubenswrapper[7271]: I0313 10:38:44.162308 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7baf3efc-04dc-4c17-9c2a-397ac022d281" (UID: "7baf3efc-04dc-4c17-9c2a-397ac022d281"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:38:44.163271 master-0 kubenswrapper[7271]: I0313 10:38:44.162330 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-var-lock" (OuterVolumeSpecName: "var-lock") pod "7baf3efc-04dc-4c17-9c2a-397ac022d281" (UID: "7baf3efc-04dc-4c17-9c2a-397ac022d281"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:38:44.168843 master-0 kubenswrapper[7271]: I0313 10:38:44.168768 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7baf3efc-04dc-4c17-9c2a-397ac022d281-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7baf3efc-04dc-4c17-9c2a-397ac022d281" (UID: "7baf3efc-04dc-4c17-9c2a-397ac022d281"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:38:44.168843 master-0 kubenswrapper[7271]: I0313 10:38:44.168810 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9e06733a-9c47-4bcf-a5e2-946db8e2714b" (UID: "9e06733a-9c47-4bcf-a5e2-946db8e2714b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:38:44.263719 master-0 kubenswrapper[7271]: I0313 10:38:44.263524 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7baf3efc-04dc-4c17-9c2a-397ac022d281-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:38:44.263719 master-0 kubenswrapper[7271]: I0313 10:38:44.263629 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:38:44.263719 master-0 kubenswrapper[7271]: I0313 10:38:44.263646 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:38:44.263719 master-0 kubenswrapper[7271]: I0313 10:38:44.263659 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7baf3efc-04dc-4c17-9c2a-397ac022d281-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:38:44.263719 master-0 kubenswrapper[7271]: I0313 10:38:44.263672 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e06733a-9c47-4bcf-a5e2-946db8e2714b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:38:44.263719 master-0 kubenswrapper[7271]: I0313 10:38:44.263685 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e06733a-9c47-4bcf-a5e2-946db8e2714b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: I0313 10:38:44.464277 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[+]ping ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]etcd ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:38:44.464344 master-0 kubenswrapper[7271]: I0313 10:38:44.464348 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:38:44.628561 master-0 kubenswrapper[7271]: I0313 10:38:44.628443 7271 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_7baf3efc-04dc-4c17-9c2a-397ac022d281/installer/0.log" Mar 13 10:38:44.628820 master-0 kubenswrapper[7271]: I0313 10:38:44.628573 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"7baf3efc-04dc-4c17-9c2a-397ac022d281","Type":"ContainerDied","Data":"13b9f88ad828dc6f0b9caaa000ec4304ee1cea2959cd111893dbdd54815ac13d"} Mar 13 10:38:44.628820 master-0 kubenswrapper[7271]: I0313 10:38:44.628626 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b9f88ad828dc6f0b9caaa000ec4304ee1cea2959cd111893dbdd54815ac13d" Mar 13 10:38:44.628820 master-0 kubenswrapper[7271]: I0313 10:38:44.628711 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.638064 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" event={"ID":"66f49a19-0e3b-4611-b8a6-5f5687fa20b6","Type":"ContainerStarted","Data":"34c705593dd577219134e52fa5f1f4ac1bf3a254e75ac17359d23f2432c84086"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.640193 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" event={"ID":"37b2e803-302b-4650-b18f-d3d2dd703bd5","Type":"ContainerStarted","Data":"881405211eef76d473660b20a0d3c866e54acadcefe8c182ab1f5f97e108929c"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.642985 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-9z8mk_f87662b9-6ac6-44f3-8a16-ff858c2baa91/approver/0.log" Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.643554 7271 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-node-identity/network-node-identity-9z8mk" event={"ID":"f87662b9-6ac6-44f3-8a16-ff858c2baa91","Type":"ContainerStarted","Data":"7e21ba1a4a052f4311590e81daf7c7043a43eea8119ade6c511b95ed35202221"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.647397 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-6vpl4_1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/network-operator/0.log" Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.647462 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" event={"ID":"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9","Type":"ContainerStarted","Data":"4d75e74c4df786ae928889ac54113d7b673c3ebf79a2a08a34f9fbe9b63c1453"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.649428 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jz2lp" event={"ID":"79bb87a4-8834-4c73-834e-356ccc1f7f9b","Type":"ContainerStarted","Data":"21ea23db5a94394fed39e6756a1919898e68c50238c79a5641bf3126f4447416"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.651636 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" event={"ID":"8f9db15a-8854-485b-9863-9cbe5dddd977","Type":"ContainerStarted","Data":"3d7f37aa994251928291249049a2be620c22f26b28c64911444e794ad1a679e5"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.653764 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" event={"ID":"ec3168fc-6c8f-4603-94e0-17b1ae22a802","Type":"ContainerStarted","Data":"294850f202234f4a9d138e028654f94bb9813203f7edf3397d10697e7a4b46a2"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 
10:38:44.656868 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" event={"ID":"8a305f45-8689-45a8-8c8b-5954f2c863df","Type":"ContainerStarted","Data":"82fd541215028d49819c1c4e25952f952bcebf54af53e12127c90c1cd7ebb91c"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.656929 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" event={"ID":"8a305f45-8689-45a8-8c8b-5954f2c863df","Type":"ContainerStarted","Data":"cf9561f8a446435dd3e05b7973785f1768a9224b0e43a36e35e60c9ec1bc16a2"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.658568 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" event={"ID":"95339220-324d-45e7-bdc2-e4f42fbd1d32","Type":"ContainerStarted","Data":"64faab925d07fb80bd4ae56d2309ec92e60b31ddda32859daa4f5dfef61fdcc5"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.660348 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" event={"ID":"2afe3890-e844-4dd3-ba49-3ac9178549bf","Type":"ContainerStarted","Data":"2d8c2c573acc02ece57d91166be062a427bbc681f8936d54a20df38a4936dc09"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.662747 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_9e06733a-9c47-4bcf-a5e2-946db8e2714b/installer/0.log" Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.662869 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.662883 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"9e06733a-9c47-4bcf-a5e2-946db8e2714b","Type":"ContainerDied","Data":"8624adf36154fe1f7cdb5c9eb99ed2b301e80e18fd1f6d8154a250c1a73d647b"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.662917 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8624adf36154fe1f7cdb5c9eb99ed2b301e80e18fd1f6d8154a250c1a73d647b" Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.665614 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nsg74_282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/openshift-controller-manager-operator/1.log" Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.665743 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" event={"ID":"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303","Type":"ContainerStarted","Data":"44045eb34dbce8a8d8c5bec28be559a0d562acea9909308b142b2b5b5860a229"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.668500 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" event={"ID":"5ed5e77b-948b-4d94-ac9f-440ee3c07e18","Type":"ContainerStarted","Data":"7f952b61d71e907b8ab35c403ca342055b58e2b44f1c8092061e8d04df9ac501"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.670205 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" 
event={"ID":"c455a959-d764-4b4f-a1e0-95c27495dd9d","Type":"ContainerStarted","Data":"6b0b21ce8c91e31c5d3fafde2dc1d7d9feb5cca70a9bf65bb781c974d266575e"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.671298 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" event={"ID":"4d5479f3-51ec-4b93-8188-21cdda44828d","Type":"ContainerStarted","Data":"d5d5f29010412c6336405d3c5516283cb7d7f5b2df47504d4448651a9a52ed98"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.678274 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" event={"ID":"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba","Type":"ContainerStarted","Data":"5cf7d401ea622e52729b46eea598afe245447756a5d119bc7987bfb6c5cfb794"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.684333 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" event={"ID":"b8d40b37-0f3d-4531-9fa8-eda965d2337d","Type":"ContainerStarted","Data":"928f705a6df1a237b298e2f772354a8814379ea930e2d466bbe222c0fc185584"} Mar 13 10:38:45.085234 master-0 kubenswrapper[7271]: I0313 10:38:44.687944 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" event={"ID":"574bf255-14b3-40af-b240-2d3abd5b86b8","Type":"ContainerStarted","Data":"5562479ec1e49b40c330a36ec4d9ac6d15b4428df0c9b17bcdf8d8cf48cf7a09"} Mar 13 10:38:46.155743 master-0 kubenswrapper[7271]: I0313 10:38:46.155675 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 13 10:38:46.731310 master-0 kubenswrapper[7271]: I0313 10:38:46.731236 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/manager/0.log" Mar 13 10:38:46.731929 master-0 kubenswrapper[7271]: I0313 10:38:46.731875 7271 generic.go:334] "Generic (PLEG): container finished" podID="b10584c2-ef04-4649-bcb6-9222c9530c3f" containerID="f661d164e1cae288da9b5b814f572be1703c2513d35aac45b2b22784229191e4" exitCode=1 Mar 13 10:38:46.732170 master-0 kubenswrapper[7271]: I0313 10:38:46.731939 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" event={"ID":"b10584c2-ef04-4649-bcb6-9222c9530c3f","Type":"ContainerDied","Data":"f661d164e1cae288da9b5b814f572be1703c2513d35aac45b2b22784229191e4"} Mar 13 10:38:46.733344 master-0 kubenswrapper[7271]: I0313 10:38:46.733310 7271 scope.go:117] "RemoveContainer" containerID="f661d164e1cae288da9b5b814f572be1703c2513d35aac45b2b22784229191e4" Mar 13 10:38:46.853376 master-0 kubenswrapper[7271]: I0313 10:38:46.853324 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:38:46.853724 master-0 kubenswrapper[7271]: I0313 10:38:46.853679 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:38:47.468394 master-0 kubenswrapper[7271]: I0313 10:38:47.468298 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:38:47.472265 master-0 kubenswrapper[7271]: I0313 10:38:47.472194 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:38:47.681245 master-0 kubenswrapper[7271]: I0313 10:38:47.681167 7271 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 10:38:47.740272 master-0 kubenswrapper[7271]: I0313 10:38:47.740192 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/manager/0.log" Mar 13 10:38:47.740910 master-0 kubenswrapper[7271]: I0313 10:38:47.740846 7271 generic.go:334] "Generic (PLEG): container finished" podID="257a4a8b-014c-4473-80a0-e95cf6d41bf1" containerID="5f05908e71448e64ca18d1219369017d904e020901e65c57a4853144db037d28" exitCode=1 Mar 13 10:38:47.740977 master-0 kubenswrapper[7271]: I0313 10:38:47.740925 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" event={"ID":"257a4a8b-014c-4473-80a0-e95cf6d41bf1","Type":"ContainerDied","Data":"5f05908e71448e64ca18d1219369017d904e020901e65c57a4853144db037d28"} Mar 13 10:38:47.742127 master-0 kubenswrapper[7271]: I0313 10:38:47.742094 7271 scope.go:117] "RemoveContainer" containerID="5f05908e71448e64ca18d1219369017d904e020901e65c57a4853144db037d28" Mar 13 10:38:47.746692 master-0 kubenswrapper[7271]: I0313 10:38:47.746668 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:38:47.753998 master-0 kubenswrapper[7271]: E0313 10:38:47.753972 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 13 10:38:47.792861 master-0 kubenswrapper[7271]: I0313 10:38:47.792760 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.792734741 podStartE2EDuration="792.734741ms" podCreationTimestamp="2026-03-13 10:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:38:47.789158952 
+0000 UTC m=+182.315981362" watchObservedRunningTime="2026-03-13 10:38:47.792734741 +0000 UTC m=+182.319557121" Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: I0313 10:38:49.464726 7271 patch_prober.go:28] interesting pod/apiserver-65bc99cdf7-7rjbr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]log ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]etcd ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/max-in-flight-filter ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-startinformers ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 10:38:49.465331 master-0 kubenswrapper[7271]: livez check failed Mar 13 10:38:49.465331 master-0 
kubenswrapper[7271]: I0313 10:38:49.464886 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" podUID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:38:50.766328 master-0 kubenswrapper[7271]: I0313 10:38:50.765460 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/manager/0.log" Mar 13 10:38:50.766328 master-0 kubenswrapper[7271]: I0313 10:38:50.765597 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" event={"ID":"b10584c2-ef04-4649-bcb6-9222c9530c3f","Type":"ContainerStarted","Data":"2a9aaa81e2cc4ad44480999dff8ac1b2c80678408fd67b6fb365310487f92570"} Mar 13 10:38:50.766328 master-0 kubenswrapper[7271]: I0313 10:38:50.766234 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:38:50.769305 master-0 kubenswrapper[7271]: I0313 10:38:50.768711 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerStarted","Data":"6a16ce55b44dfc39848323a97883033429768f52190f07a86f2d6e8605fcf149"} Mar 13 10:38:50.769305 master-0 kubenswrapper[7271]: I0313 10:38:50.768752 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerStarted","Data":"8931f468146aea32eb1151d08ef9573b7c8bddcc57495ce9f6bd5b790621abc0"} Mar 13 10:38:50.778074 master-0 kubenswrapper[7271]: I0313 10:38:50.778002 7271 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" event={"ID":"4d5479f3-51ec-4b93-8188-21cdda44828d","Type":"ContainerStarted","Data":"f53a817dc8ce4fd2101aed7eca741c5f7906566b18cfcd92e2add08539bd45db"} Mar 13 10:38:50.784635 master-0 kubenswrapper[7271]: I0313 10:38:50.783163 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" event={"ID":"66f49a19-0e3b-4611-b8a6-5f5687fa20b6","Type":"ContainerStarted","Data":"2b215655327c77c15b5c8c962ef77f234a333c87823e067c5e476916a7abcdf5"} Mar 13 10:38:50.784635 master-0 kubenswrapper[7271]: I0313 10:38:50.783301 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:38:50.785011 master-0 kubenswrapper[7271]: I0313 10:38:50.784966 7271 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-85x6d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused" start-of-body= Mar 13 10:38:50.785066 master-0 kubenswrapper[7271]: I0313 10:38:50.785028 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" podUID="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused" Mar 13 10:38:50.799816 master-0 kubenswrapper[7271]: I0313 10:38:50.799763 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" event={"ID":"95339220-324d-45e7-bdc2-e4f42fbd1d32","Type":"ContainerStarted","Data":"b8b86d02f4f86b49f256fe88515a474a9fb718a6bd218f138f4504fc8b7c89fc"} Mar 13 10:38:50.800088 master-0 kubenswrapper[7271]: I0313 
10:38:50.800069 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" event={"ID":"95339220-324d-45e7-bdc2-e4f42fbd1d32","Type":"ContainerStarted","Data":"6ac08771019787a7c11813b1fc15b8b6c6e6e35ed0a49a438a259a987603471f"} Mar 13 10:38:50.802826 master-0 kubenswrapper[7271]: I0313 10:38:50.802772 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_feb7b798-15b5-4004-87d0-96ce9381cdbe/installer/0.log" Mar 13 10:38:50.802921 master-0 kubenswrapper[7271]: I0313 10:38:50.802851 7271 generic.go:334] "Generic (PLEG): container finished" podID="feb7b798-15b5-4004-87d0-96ce9381cdbe" containerID="28aad4d86302888f158c61e3738904f7d878550af4392e7ed53add211247a0cd" exitCode=1 Mar 13 10:38:50.802984 master-0 kubenswrapper[7271]: I0313 10:38:50.802965 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"feb7b798-15b5-4004-87d0-96ce9381cdbe","Type":"ContainerDied","Data":"28aad4d86302888f158c61e3738904f7d878550af4392e7ed53add211247a0cd"} Mar 13 10:38:50.809189 master-0 kubenswrapper[7271]: I0313 10:38:50.809136 7271 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-85x6d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused" start-of-body= Mar 13 10:38:50.809370 master-0 kubenswrapper[7271]: I0313 10:38:50.809194 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" podUID="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused" Mar 13 10:38:50.809819 master-0 kubenswrapper[7271]: I0313 10:38:50.809781 
7271 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-85x6d container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused" start-of-body= Mar 13 10:38:50.809966 master-0 kubenswrapper[7271]: I0313 10:38:50.809930 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" podUID="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused" Mar 13 10:38:50.816308 master-0 kubenswrapper[7271]: I0313 10:38:50.816266 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/manager/0.log" Mar 13 10:38:50.817773 master-0 kubenswrapper[7271]: I0313 10:38:50.817729 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" event={"ID":"257a4a8b-014c-4473-80a0-e95cf6d41bf1","Type":"ContainerStarted","Data":"505693401e0336c91ab91119b9f53889693ae2d79a1c0a657057ebc4d2c80fa9"} Mar 13 10:38:50.818548 master-0 kubenswrapper[7271]: I0313 10:38:50.818518 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:38:50.830381 master-0 kubenswrapper[7271]: I0313 10:38:50.830329 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jz2lp" event={"ID":"79bb87a4-8834-4c73-834e-356ccc1f7f9b","Type":"ContainerStarted","Data":"6ef25c015356560cfb9f91b72495486f2d7ac2659e89c816f22cab02e066905b"} Mar 13 10:38:50.830381 master-0 kubenswrapper[7271]: I0313 10:38:50.830383 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-jz2lp" event={"ID":"79bb87a4-8834-4c73-834e-356ccc1f7f9b","Type":"ContainerStarted","Data":"b031f78e8232b77f29290c5e8a2f2e7d722b9c206b418d0c37ec0332a02612aa"} Mar 13 10:38:51.155815 master-0 kubenswrapper[7271]: I0313 10:38:51.155646 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 13 10:38:51.181272 master-0 kubenswrapper[7271]: I0313 10:38:51.181096 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 13 10:38:51.842885 master-0 kubenswrapper[7271]: I0313 10:38:51.842796 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" Mar 13 10:38:51.856330 master-0 kubenswrapper[7271]: I0313 10:38:51.856240 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 13 10:38:54.205866 master-0 kubenswrapper[7271]: I0313 10:38:54.205798 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_feb7b798-15b5-4004-87d0-96ce9381cdbe/installer/0.log" Mar 13 10:38:54.206577 master-0 kubenswrapper[7271]: I0313 10:38:54.205892 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 10:38:54.235106 master-0 kubenswrapper[7271]: I0313 10:38:54.235040 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-var-lock\") pod \"feb7b798-15b5-4004-87d0-96ce9381cdbe\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " Mar 13 10:38:54.235397 master-0 kubenswrapper[7271]: I0313 10:38:54.235180 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-kubelet-dir\") pod \"feb7b798-15b5-4004-87d0-96ce9381cdbe\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " Mar 13 10:38:54.235397 master-0 kubenswrapper[7271]: I0313 10:38:54.235326 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/feb7b798-15b5-4004-87d0-96ce9381cdbe-kube-api-access\") pod \"feb7b798-15b5-4004-87d0-96ce9381cdbe\" (UID: \"feb7b798-15b5-4004-87d0-96ce9381cdbe\") " Mar 13 10:38:54.235397 master-0 kubenswrapper[7271]: I0313 10:38:54.235360 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-var-lock" (OuterVolumeSpecName: "var-lock") pod "feb7b798-15b5-4004-87d0-96ce9381cdbe" (UID: "feb7b798-15b5-4004-87d0-96ce9381cdbe"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:38:54.235521 master-0 kubenswrapper[7271]: I0313 10:38:54.235436 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "feb7b798-15b5-4004-87d0-96ce9381cdbe" (UID: "feb7b798-15b5-4004-87d0-96ce9381cdbe"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:38:54.236148 master-0 kubenswrapper[7271]: I0313 10:38:54.235763 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:38:54.236148 master-0 kubenswrapper[7271]: I0313 10:38:54.235793 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/feb7b798-15b5-4004-87d0-96ce9381cdbe-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:38:54.239740 master-0 kubenswrapper[7271]: I0313 10:38:54.239517 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb7b798-15b5-4004-87d0-96ce9381cdbe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "feb7b798-15b5-4004-87d0-96ce9381cdbe" (UID: "feb7b798-15b5-4004-87d0-96ce9381cdbe"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:38:54.337777 master-0 kubenswrapper[7271]: I0313 10:38:54.337711 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/feb7b798-15b5-4004-87d0-96ce9381cdbe-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:38:54.466702 master-0 kubenswrapper[7271]: I0313 10:38:54.466656 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:38:54.476849 master-0 kubenswrapper[7271]: I0313 10:38:54.476811 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:38:54.868123 master-0 kubenswrapper[7271]: I0313 10:38:54.868072 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/0.log" Mar 13 10:38:54.868379 master-0 kubenswrapper[7271]: I0313 10:38:54.868153 7271 generic.go:334] "Generic (PLEG): container finished" podID="6622be09-206e-4d02-90ca-6d9f2fc852aa" containerID="426e576deb6604dde643ee98f5460b9f1475fda12e39205758c5b7f3ec56452f" exitCode=1 Mar 13 10:38:54.868379 master-0 kubenswrapper[7271]: I0313 10:38:54.868303 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerDied","Data":"426e576deb6604dde643ee98f5460b9f1475fda12e39205758c5b7f3ec56452f"} Mar 13 10:38:54.868946 master-0 kubenswrapper[7271]: I0313 10:38:54.868917 7271 scope.go:117] "RemoveContainer" containerID="426e576deb6604dde643ee98f5460b9f1475fda12e39205758c5b7f3ec56452f" Mar 13 10:38:54.875086 master-0 kubenswrapper[7271]: I0313 10:38:54.875037 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" event={"ID":"8a305f45-8689-45a8-8c8b-5954f2c863df","Type":"ContainerStarted","Data":"89274f7911bc25e38977ddb45d006b7195ff00ecbb96f23c5359ae00a584f176"} Mar 13 10:38:54.875172 master-0 kubenswrapper[7271]: I0313 10:38:54.875150 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:38:54.877557 master-0 kubenswrapper[7271]: I0313 10:38:54.877517 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" event={"ID":"c455a959-d764-4b4f-a1e0-95c27495dd9d","Type":"ContainerStarted","Data":"0a32801239b79a9c5702411b6eab3ee942b1e1b3815f9e18080d836a51090c5c"} Mar 13 10:38:54.881624 master-0 kubenswrapper[7271]: I0313 10:38:54.878566 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:38:54.885618 master-0 kubenswrapper[7271]: I0313 10:38:54.885349 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" event={"ID":"2afe3890-e844-4dd3-ba49-3ac9178549bf","Type":"ContainerStarted","Data":"8c1921cf43a9e974cfff4df90b36bdd24351b681efff106a059450a6a6a9dddd"} Mar 13 10:38:54.886905 master-0 kubenswrapper[7271]: I0313 10:38:54.886867 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:38:54.887131 master-0 kubenswrapper[7271]: I0313 10:38:54.887117 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:38:54.889634 master-0 kubenswrapper[7271]: I0313 10:38:54.889573 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_feb7b798-15b5-4004-87d0-96ce9381cdbe/installer/0.log" Mar 13 10:38:54.890519 master-0 kubenswrapper[7271]: I0313 10:38:54.890244 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 10:38:54.890519 master-0 kubenswrapper[7271]: I0313 10:38:54.890393 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"feb7b798-15b5-4004-87d0-96ce9381cdbe","Type":"ContainerDied","Data":"54845f97730049024e50483462ec2fdbbd2a3bf95c64c4c162c260a6e6834b4f"} Mar 13 10:38:54.890519 master-0 kubenswrapper[7271]: I0313 10:38:54.890419 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54845f97730049024e50483462ec2fdbbd2a3bf95c64c4c162c260a6e6834b4f" Mar 13 10:38:54.909616 master-0 kubenswrapper[7271]: I0313 10:38:54.907972 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:38:55.897994 master-0 kubenswrapper[7271]: I0313 10:38:55.897964 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/0.log" Mar 13 10:38:55.898694 master-0 kubenswrapper[7271]: I0313 10:38:55.898644 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerStarted","Data":"000574ac95c46dea00d94f10637b547931d5cf4cebc923f39d6577d129f9a2fa"} Mar 13 10:38:56.856790 master-0 kubenswrapper[7271]: I0313 10:38:56.856695 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:38:59.517837 master-0 kubenswrapper[7271]: I0313 10:38:59.517746 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bgvrc"] Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: E0313 10:38:59.518160 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb7b798-15b5-4004-87d0-96ce9381cdbe" containerName="installer" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: I0313 10:38:59.518181 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb7b798-15b5-4004-87d0-96ce9381cdbe" containerName="installer" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: E0313 10:38:59.518199 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aaf36b4-e556-4723-a624-aa2edc69c301" containerName="cluster-version-operator" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: I0313 10:38:59.518209 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aaf36b4-e556-4723-a624-aa2edc69c301" containerName="cluster-version-operator" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: E0313 10:38:59.518220 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e8e251-40d9-458a-92a7-9b2e91dc7359" containerName="installer" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: I0313 10:38:59.518229 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e8e251-40d9-458a-92a7-9b2e91dc7359" containerName="installer" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: E0313 10:38:59.518244 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7baf3efc-04dc-4c17-9c2a-397ac022d281" containerName="installer" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: I0313 10:38:59.518255 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="7baf3efc-04dc-4c17-9c2a-397ac022d281" containerName="installer" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: E0313 
10:38:59.518276 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e06733a-9c47-4bcf-a5e2-946db8e2714b" containerName="installer" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: I0313 10:38:59.518291 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e06733a-9c47-4bcf-a5e2-946db8e2714b" containerName="installer" Mar 13 10:38:59.518403 master-0 kubenswrapper[7271]: I0313 10:38:59.518412 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb7b798-15b5-4004-87d0-96ce9381cdbe" containerName="installer" Mar 13 10:38:59.518827 master-0 kubenswrapper[7271]: I0313 10:38:59.518446 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e8e251-40d9-458a-92a7-9b2e91dc7359" containerName="installer" Mar 13 10:38:59.518827 master-0 kubenswrapper[7271]: I0313 10:38:59.518465 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="7baf3efc-04dc-4c17-9c2a-397ac022d281" containerName="installer" Mar 13 10:38:59.518827 master-0 kubenswrapper[7271]: I0313 10:38:59.518487 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e06733a-9c47-4bcf-a5e2-946db8e2714b" containerName="installer" Mar 13 10:38:59.518827 master-0 kubenswrapper[7271]: I0313 10:38:59.518501 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aaf36b4-e556-4723-a624-aa2edc69c301" containerName="cluster-version-operator" Mar 13 10:38:59.519747 master-0 kubenswrapper[7271]: I0313 10:38:59.519703 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mrztj"] Mar 13 10:38:59.519923 master-0 kubenswrapper[7271]: I0313 10:38:59.519724 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:38:59.521306 master-0 kubenswrapper[7271]: I0313 10:38:59.521288 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:38:59.521821 master-0 kubenswrapper[7271]: I0313 10:38:59.521752 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vr4ts"] Mar 13 10:38:59.522754 master-0 kubenswrapper[7271]: I0313 10:38:59.522730 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:38:59.524697 master-0 kubenswrapper[7271]: I0313 10:38:59.524680 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-ntlbj" Mar 13 10:38:59.525116 master-0 kubenswrapper[7271]: I0313 10:38:59.525102 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-xpxj2" Mar 13 10:38:59.525355 master-0 kubenswrapper[7271]: I0313 10:38:59.525342 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bfgw8" Mar 13 10:38:59.531005 master-0 kubenswrapper[7271]: I0313 10:38:59.530970 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bgvrc"] Mar 13 10:38:59.534038 master-0 kubenswrapper[7271]: I0313 10:38:59.534021 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrztj"] Mar 13 10:38:59.542356 master-0 kubenswrapper[7271]: I0313 10:38:59.542296 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vr4ts"] Mar 13 10:38:59.605202 master-0 kubenswrapper[7271]: I0313 10:38:59.605157 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-utilities\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " 
pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:38:59.605202 master-0 kubenswrapper[7271]: I0313 10:38:59.605210 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdb2x\" (UniqueName: \"kubernetes.io/projected/2a05e72d-836f-40e0-8a5c-ee02dce494b3-kube-api-access-qdb2x\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:38:59.605460 master-0 kubenswrapper[7271]: I0313 10:38:59.605247 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-catalog-content\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:38:59.605460 master-0 kubenswrapper[7271]: I0313 10:38:59.605266 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-catalog-content\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:38:59.605460 master-0 kubenswrapper[7271]: I0313 10:38:59.605293 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-utilities\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:38:59.605460 master-0 kubenswrapper[7271]: I0313 10:38:59.605313 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvhh\" (UniqueName: 
\"kubernetes.io/projected/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-kube-api-access-rvvhh\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:38:59.605460 master-0 kubenswrapper[7271]: I0313 10:38:59.605333 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-catalog-content\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:38:59.605460 master-0 kubenswrapper[7271]: I0313 10:38:59.605351 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt6sd\" (UniqueName: \"kubernetes.io/projected/5aa507cf-017d-44f5-8662-77547f82fb51-kube-api-access-jt6sd\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:38:59.605460 master-0 kubenswrapper[7271]: I0313 10:38:59.605378 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-utilities\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:38:59.706382 master-0 kubenswrapper[7271]: I0313 10:38:59.706304 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-catalog-content\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:38:59.706382 master-0 kubenswrapper[7271]: I0313 10:38:59.706361 7271 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-catalog-content\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:38:59.706382 master-0 kubenswrapper[7271]: I0313 10:38:59.706393 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-utilities\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:38:59.706795 master-0 kubenswrapper[7271]: I0313 10:38:59.706533 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvhh\" (UniqueName: \"kubernetes.io/projected/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-kube-api-access-rvvhh\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:38:59.706795 master-0 kubenswrapper[7271]: I0313 10:38:59.706563 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-catalog-content\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:38:59.706795 master-0 kubenswrapper[7271]: I0313 10:38:59.706615 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt6sd\" (UniqueName: \"kubernetes.io/projected/5aa507cf-017d-44f5-8662-77547f82fb51-kube-api-access-jt6sd\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:38:59.706795 
master-0 kubenswrapper[7271]: I0313 10:38:59.706645 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-utilities\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj"
Mar 13 10:38:59.706795 master-0 kubenswrapper[7271]: I0313 10:38:59.706663 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-utilities\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts"
Mar 13 10:38:59.706795 master-0 kubenswrapper[7271]: I0313 10:38:59.706682 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdb2x\" (UniqueName: \"kubernetes.io/projected/2a05e72d-836f-40e0-8a5c-ee02dce494b3-kube-api-access-qdb2x\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj"
Mar 13 10:38:59.707127 master-0 kubenswrapper[7271]: I0313 10:38:59.707077 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-catalog-content\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts"
Mar 13 10:38:59.707711 master-0 kubenswrapper[7271]: I0313 10:38:59.707479 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-catalog-content\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj"
Mar 13 10:38:59.707711 master-0 kubenswrapper[7271]: I0313 10:38:59.707693 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-utilities\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj"
Mar 13 10:38:59.707878 master-0 kubenswrapper[7271]: I0313 10:38:59.707845 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-utilities\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:38:59.707936 master-0 kubenswrapper[7271]: I0313 10:38:59.707922 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-utilities\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts"
Mar 13 10:38:59.708137 master-0 kubenswrapper[7271]: I0313 10:38:59.708103 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-catalog-content\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:39:00.135320 master-0 kubenswrapper[7271]: I0313 10:39:00.135254 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jdzpd"]
Mar 13 10:39:00.144923 master-0 kubenswrapper[7271]: I0313 10:39:00.144829 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"]
Mar 13 10:39:00.145128 master-0 kubenswrapper[7271]: I0313 10:39:00.144984 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:00.145626 master-0 kubenswrapper[7271]: I0313 10:39:00.145603 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.147947 master-0 kubenswrapper[7271]: I0313 10:39:00.147906 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 10:39:00.148014 master-0 kubenswrapper[7271]: I0313 10:39:00.147992 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zwgvd"
Mar 13 10:39:00.148173 master-0 kubenswrapper[7271]: I0313 10:39:00.148147 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 10:39:00.149851 master-0 kubenswrapper[7271]: I0313 10:39:00.149813 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 10:39:00.213731 master-0 kubenswrapper[7271]: I0313 10:39:00.213645 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac1a605-d2d5-4004-96f5-121c20555bde-service-ca\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.213731 master-0 kubenswrapper[7271]: I0313 10:39:00.213724 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.213994 master-0 kubenswrapper[7271]: I0313 10:39:00.213773 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac1a605-d2d5-4004-96f5-121c20555bde-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.213994 master-0 kubenswrapper[7271]: I0313 10:39:00.213805 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-utilities\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:00.213994 master-0 kubenswrapper[7271]: I0313 10:39:00.213832 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8q5s\" (UniqueName: \"kubernetes.io/projected/beee81ef-5a3a-4df2-85d5-2573679d261f-kube-api-access-f8q5s\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:00.213994 master-0 kubenswrapper[7271]: I0313 10:39:00.213875 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.213994 master-0 kubenswrapper[7271]: I0313 10:39:00.213900 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac1a605-d2d5-4004-96f5-121c20555bde-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.213994 master-0 kubenswrapper[7271]: I0313 10:39:00.213939 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-catalog-content\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:00.227953 master-0 kubenswrapper[7271]: I0313 10:39:00.227889 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdb2x\" (UniqueName: \"kubernetes.io/projected/2a05e72d-836f-40e0-8a5c-ee02dce494b3-kube-api-access-qdb2x\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj"
Mar 13 10:39:00.229185 master-0 kubenswrapper[7271]: I0313 10:39:00.229112 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt6sd\" (UniqueName: \"kubernetes.io/projected/5aa507cf-017d-44f5-8662-77547f82fb51-kube-api-access-jt6sd\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts"
Mar 13 10:39:00.230389 master-0 kubenswrapper[7271]: I0313 10:39:00.230322 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvhh\" (UniqueName: \"kubernetes.io/projected/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-kube-api-access-rvvhh\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:39:00.315393 master-0 kubenswrapper[7271]: I0313 10:39:00.315305 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.315393 master-0 kubenswrapper[7271]: I0313 10:39:00.315376 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac1a605-d2d5-4004-96f5-121c20555bde-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.315745 master-0 kubenswrapper[7271]: I0313 10:39:00.315691 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.315745 master-0 kubenswrapper[7271]: I0313 10:39:00.315692 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-catalog-content\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:00.315869 master-0 kubenswrapper[7271]: I0313 10:39:00.315831 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac1a605-d2d5-4004-96f5-121c20555bde-service-ca\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.316021 master-0 kubenswrapper[7271]: I0313 10:39:00.315972 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.316062 master-0 kubenswrapper[7271]: I0313 10:39:00.316031 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.316142 master-0 kubenswrapper[7271]: I0313 10:39:00.316126 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac1a605-d2d5-4004-96f5-121c20555bde-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.316278 master-0 kubenswrapper[7271]: I0313 10:39:00.316250 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-utilities\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:00.316316 master-0 kubenswrapper[7271]: I0313 10:39:00.316288 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8q5s\" (UniqueName: \"kubernetes.io/projected/beee81ef-5a3a-4df2-85d5-2573679d261f-kube-api-access-f8q5s\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:00.316393 master-0 kubenswrapper[7271]: I0313 10:39:00.316357 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-catalog-content\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:00.316750 master-0 kubenswrapper[7271]: I0313 10:39:00.316722 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-utilities\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:00.316800 master-0 kubenswrapper[7271]: I0313 10:39:00.316726 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac1a605-d2d5-4004-96f5-121c20555bde-service-ca\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.319554 master-0 kubenswrapper[7271]: I0313 10:39:00.319502 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac1a605-d2d5-4004-96f5-121c20555bde-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:00.449060 master-0 kubenswrapper[7271]: I0313 10:39:00.448922 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:39:00.469650 master-0 kubenswrapper[7271]: I0313 10:39:00.469572 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrztj"
Mar 13 10:39:00.484750 master-0 kubenswrapper[7271]: I0313 10:39:00.484705 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vr4ts"
Mar 13 10:39:00.753197 master-0 kubenswrapper[7271]: I0313 10:39:00.753034 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jdzpd"]
Mar 13 10:39:01.967954 master-0 kubenswrapper[7271]: I0313 10:39:01.967866 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrztj"]
Mar 13 10:39:01.974153 master-0 kubenswrapper[7271]: I0313 10:39:01.969821 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bgvrc"]
Mar 13 10:39:01.974153 master-0 kubenswrapper[7271]: I0313 10:39:01.970458 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac1a605-d2d5-4004-96f5-121c20555bde-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:01.974153 master-0 kubenswrapper[7271]: I0313 10:39:01.970536 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8q5s\" (UniqueName: \"kubernetes.io/projected/beee81ef-5a3a-4df2-85d5-2573679d261f-kube-api-access-f8q5s\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:01.992126 master-0 kubenswrapper[7271]: I0313 10:39:01.992081 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:39:02.012665 master-0 kubenswrapper[7271]: W0313 10:39:02.012620 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ac1a605_d2d5_4004_96f5_121c20555bde.slice/crio-c9ad49e483c47605d1fda52e7e670f8d0dd2ee6b7b1f41e2ebd66c5396af192f WatchSource:0}: Error finding container c9ad49e483c47605d1fda52e7e670f8d0dd2ee6b7b1f41e2ebd66c5396af192f: Status 404 returned error can't find the container with id c9ad49e483c47605d1fda52e7e670f8d0dd2ee6b7b1f41e2ebd66c5396af192f
Mar 13 10:39:02.053018 master-0 kubenswrapper[7271]: I0313 10:39:02.052979 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:39:02.264607 master-0 kubenswrapper[7271]: I0313 10:39:02.264372 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:39:02.675017 master-0 kubenswrapper[7271]: I0313 10:39:02.674719 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jdzpd"]
Mar 13 10:39:02.685146 master-0 kubenswrapper[7271]: I0313 10:39:02.685006 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vr4ts"]
Mar 13 10:39:02.935198 master-0 kubenswrapper[7271]: I0313 10:39:02.935158 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst" event={"ID":"0ac1a605-d2d5-4004-96f5-121c20555bde","Type":"ContainerStarted","Data":"9fa1a1f3dc431f4d1989376ade490c97b3ca19baaab0c502fea959b427739c54"}
Mar 13 10:39:02.935198 master-0 kubenswrapper[7271]: I0313 10:39:02.935203 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst" event={"ID":"0ac1a605-d2d5-4004-96f5-121c20555bde","Type":"ContainerStarted","Data":"c9ad49e483c47605d1fda52e7e670f8d0dd2ee6b7b1f41e2ebd66c5396af192f"}
Mar 13 10:39:02.937707 master-0 kubenswrapper[7271]: I0313 10:39:02.937676 7271 generic.go:334] "Generic (PLEG): container finished" podID="5aa507cf-017d-44f5-8662-77547f82fb51" containerID="a18c04cbfe5a7abf5768c58054cd016d672f1f9f4ba2bd72d74624ba275dea07" exitCode=0
Mar 13 10:39:02.941763 master-0 kubenswrapper[7271]: I0313 10:39:02.940948 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr4ts" event={"ID":"5aa507cf-017d-44f5-8662-77547f82fb51","Type":"ContainerDied","Data":"a18c04cbfe5a7abf5768c58054cd016d672f1f9f4ba2bd72d74624ba275dea07"}
Mar 13 10:39:02.941763 master-0 kubenswrapper[7271]: I0313 10:39:02.941057 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr4ts" event={"ID":"5aa507cf-017d-44f5-8662-77547f82fb51","Type":"ContainerStarted","Data":"8b939255ebac1f66f189aaed584b6e7c61496fc54de0eca1dee70e7efa443532"}
Mar 13 10:39:02.952641 master-0 kubenswrapper[7271]: I0313 10:39:02.952048 7271 generic.go:334] "Generic (PLEG): container finished" podID="2a05e72d-836f-40e0-8a5c-ee02dce494b3" containerID="709ea323c21fae26ff2a6680d0329165925afd7a1343d424221a5d0bd6de0958" exitCode=0
Mar 13 10:39:02.952641 master-0 kubenswrapper[7271]: I0313 10:39:02.952397 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrztj" event={"ID":"2a05e72d-836f-40e0-8a5c-ee02dce494b3","Type":"ContainerDied","Data":"709ea323c21fae26ff2a6680d0329165925afd7a1343d424221a5d0bd6de0958"}
Mar 13 10:39:02.952641 master-0 kubenswrapper[7271]: I0313 10:39:02.952457 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrztj" event={"ID":"2a05e72d-836f-40e0-8a5c-ee02dce494b3","Type":"ContainerStarted","Data":"d56ffe6fa9b01bb963c33e630f78eeefc536f0ea18493c909ad582b0bbe668a2"}
Mar 13 10:39:02.956056 master-0 kubenswrapper[7271]: I0313 10:39:02.955986 7271 generic.go:334] "Generic (PLEG): container finished" podID="beee81ef-5a3a-4df2-85d5-2573679d261f" containerID="3d29df9026b8be32c69c5d366778bdae010d5195fd7cffbac836292c45f99342" exitCode=0
Mar 13 10:39:02.956107 master-0 kubenswrapper[7271]: I0313 10:39:02.956058 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdzpd" event={"ID":"beee81ef-5a3a-4df2-85d5-2573679d261f","Type":"ContainerDied","Data":"3d29df9026b8be32c69c5d366778bdae010d5195fd7cffbac836292c45f99342"}
Mar 13 10:39:02.956107 master-0 kubenswrapper[7271]: I0313 10:39:02.956105 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdzpd" event={"ID":"beee81ef-5a3a-4df2-85d5-2573679d261f","Type":"ContainerStarted","Data":"164736e7418a21cde804e102fe3d184a2797171e5f4bf83a8bf76c7c9b72cc41"}
Mar 13 10:39:02.958119 master-0 kubenswrapper[7271]: I0313 10:39:02.958093 7271 generic.go:334] "Generic (PLEG): container finished" podID="4a1b43c4-55b9-4c72-ba7c-9089bf28cf16" containerID="4eac15e946c14f20f0a00649c87e90500eec23139a51731688b3e55b52f0796d" exitCode=0
Mar 13 10:39:02.958208 master-0 kubenswrapper[7271]: I0313 10:39:02.958122 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgvrc" event={"ID":"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16","Type":"ContainerDied","Data":"4eac15e946c14f20f0a00649c87e90500eec23139a51731688b3e55b52f0796d"}
Mar 13 10:39:02.958208 master-0 kubenswrapper[7271]: I0313 10:39:02.958148 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgvrc" event={"ID":"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16","Type":"ContainerStarted","Data":"d9ff345f3e6004990e637fa6bd4c1c17fad38322042b096639037cf7570053ac"}
Mar 13 10:39:02.974877 master-0 kubenswrapper[7271]: I0313 10:39:02.974785 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst" podStartSLOduration=5.974753145 podStartE2EDuration="5.974753145s" podCreationTimestamp="2026-03-13 10:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:39:02.965637028 +0000 UTC m=+197.492459418" watchObservedRunningTime="2026-03-13 10:39:02.974753145 +0000 UTC m=+197.501575535"
Mar 13 10:39:16.055026 master-0 kubenswrapper[7271]: I0313 10:39:16.054962 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"]
Mar 13 10:39:16.056469 master-0 kubenswrapper[7271]: I0313 10:39:16.056440 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:39:16.058501 master-0 kubenswrapper[7271]: I0313 10:39:16.058153 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"]
Mar 13 10:39:16.058501 master-0 kubenswrapper[7271]: I0313 10:39:16.058176 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-4j4rp"
Mar 13 10:39:16.058501 master-0 kubenswrapper[7271]: I0313 10:39:16.058222 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 10:39:16.058501 master-0 kubenswrapper[7271]: I0313 10:39:16.058367 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 13 10:39:16.058719 master-0 kubenswrapper[7271]: I0313 10:39:16.058623 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 13 10:39:16.189562 master-0 kubenswrapper[7271]: I0313 10:39:16.189402 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:39:16.189562 master-0 kubenswrapper[7271]: I0313 10:39:16.189556 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbdwm\" (UniqueName: \"kubernetes.io/projected/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-kube-api-access-qbdwm\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:39:16.290534 master-0 kubenswrapper[7271]: I0313 10:39:16.290452 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbdwm\" (UniqueName: \"kubernetes.io/projected/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-kube-api-access-qbdwm\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:39:16.290534 master-0 kubenswrapper[7271]: I0313 10:39:16.290521 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:39:16.293975 master-0 kubenswrapper[7271]: I0313 10:39:16.293936 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:39:16.307537 master-0 kubenswrapper[7271]: I0313 10:39:16.307461 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbdwm\" (UniqueName: \"kubernetes.io/projected/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-kube-api-access-qbdwm\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:39:16.383566 master-0 kubenswrapper[7271]: I0313 10:39:16.382857 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:39:21.229090 master-0 kubenswrapper[7271]: I0313 10:39:21.228989 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"]
Mar 13 10:39:21.230283 master-0 kubenswrapper[7271]: I0313 10:39:21.230236 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.232539 master-0 kubenswrapper[7271]: I0313 10:39:21.232443 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 13 10:39:21.232727 master-0 kubenswrapper[7271]: I0313 10:39:21.232563 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 13 10:39:21.232727 master-0 kubenswrapper[7271]: I0313 10:39:21.232702 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-xdq92"
Mar 13 10:39:21.232862 master-0 kubenswrapper[7271]: I0313 10:39:21.232744 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 13 10:39:21.236182 master-0 kubenswrapper[7271]: I0313 10:39:21.236124 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 13 10:39:21.236450 master-0 kubenswrapper[7271]: I0313 10:39:21.236261 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 13 10:39:21.354713 master-0 kubenswrapper[7271]: I0313 10:39:21.354627 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-config\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.354713 master-0 kubenswrapper[7271]: I0313 10:39:21.354707 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-auth-proxy-config\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.355001 master-0 kubenswrapper[7271]: I0313 10:39:21.354847 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9l2d\" (UniqueName: \"kubernetes.io/projected/253196c1-ea2c-4382-b0fc-c56a8d919b9a-kube-api-access-h9l2d\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.355001 master-0 kubenswrapper[7271]: I0313 10:39:21.354982 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/253196c1-ea2c-4382-b0fc-c56a8d919b9a-machine-approver-tls\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.458682 master-0 kubenswrapper[7271]: I0313 10:39:21.457830 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-config\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.458682 master-0 kubenswrapper[7271]: I0313 10:39:21.457914 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-auth-proxy-config\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.458682 master-0 kubenswrapper[7271]: I0313 10:39:21.457942 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9l2d\" (UniqueName: \"kubernetes.io/projected/253196c1-ea2c-4382-b0fc-c56a8d919b9a-kube-api-access-h9l2d\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.458682 master-0 kubenswrapper[7271]: I0313 10:39:21.457973 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/253196c1-ea2c-4382-b0fc-c56a8d919b9a-machine-approver-tls\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.461028 master-0 kubenswrapper[7271]: I0313 10:39:21.459700 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-config\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.461859 master-0 kubenswrapper[7271]: I0313 10:39:21.461811 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/253196c1-ea2c-4382-b0fc-c56a8d919b9a-machine-approver-tls\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:21.476232 master-0 kubenswrapper[7271]: I0313 10:39:21.474877 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-auth-proxy-config\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:22.242749 master-0 kubenswrapper[7271]: I0313 10:39:22.240227 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9l2d\" (UniqueName: \"kubernetes.io/projected/253196c1-ea2c-4382-b0fc-c56a8d919b9a-kube-api-access-h9l2d\") pod \"machine-approver-955fcfb87-bkp8q\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:22.449994 master-0 kubenswrapper[7271]: I0313 10:39:22.449893 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"
Mar 13 10:39:26.795630 master-0 kubenswrapper[7271]: I0313 10:39:26.792767 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"]
Mar 13 10:39:26.795630 master-0 kubenswrapper[7271]: I0313 10:39:26.793817 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:39:26.799629 master-0 kubenswrapper[7271]: I0313 10:39:26.798914 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-4gsfk"
Mar 13 10:39:26.799629 master-0 kubenswrapper[7271]: I0313 10:39:26.799530 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 13 10:39:26.799867 master-0 kubenswrapper[7271]: I0313 10:39:26.799620 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 13 10:39:26.799867 master-0 kubenswrapper[7271]: I0313 10:39:26.799840 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 13 10:39:26.803614 master-0 kubenswrapper[7271]: I0313 10:39:26.799992 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 10:39:26.889468 master-0 kubenswrapper[7271]: I0313 10:39:26.889355 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"]
Mar 13 10:39:26.934566 master-0 kubenswrapper[7271]: I0313 10:39:26.934479 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:39:26.934936 master-0 kubenswrapper[7271]: I0313 10:39:26.934893 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8w5\" (UniqueName: \"kubernetes.io/projected/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-kube-api-access-gn8w5\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:39:26.934996 master-0 kubenswrapper[7271]: I0313 10:39:26.934949 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:39:27.035955 master-0 kubenswrapper[7271]: I0313 10:39:27.035885 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:39:27.036201 master-0 kubenswrapper[7271]: I0313 10:39:27.035979 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8w5\" (UniqueName: \"kubernetes.io/projected/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-kube-api-access-gn8w5\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:39:27.036201 master-0 kubenswrapper[7271]: I0313 10:39:27.036021 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" Mar 13 10:39:27.036699 master-0 kubenswrapper[7271]: I0313 10:39:27.036668 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" Mar 13 10:39:27.039246 master-0 kubenswrapper[7271]: I0313 10:39:27.039204 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" Mar 13 10:39:27.523549 master-0 kubenswrapper[7271]: I0313 10:39:27.523488 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8w5\" (UniqueName: \"kubernetes.io/projected/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-kube-api-access-gn8w5\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" Mar 13 10:39:27.722119 master-0 kubenswrapper[7271]: I0313 10:39:27.722014 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" Mar 13 10:39:28.461006 master-0 kubenswrapper[7271]: I0313 10:39:28.457105 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm"] Mar 13 10:39:28.461006 master-0 kubenswrapper[7271]: I0313 10:39:28.458028 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:39:28.463204 master-0 kubenswrapper[7271]: I0313 10:39:28.463155 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 10:39:28.464157 master-0 kubenswrapper[7271]: I0313 10:39:28.463456 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 13 10:39:28.464157 master-0 kubenswrapper[7271]: I0313 10:39:28.463604 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-frjx4" Mar 13 10:39:28.464157 master-0 kubenswrapper[7271]: I0313 10:39:28.463727 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 10:39:28.632665 master-0 kubenswrapper[7271]: I0313 10:39:28.632607 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm"] Mar 13 10:39:28.637084 master-0 kubenswrapper[7271]: I0313 10:39:28.637033 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"] Mar 13 10:39:28.638340 master-0 kubenswrapper[7271]: I0313 10:39:28.638255 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.644733 master-0 kubenswrapper[7271]: I0313 10:39:28.642786 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-q8ddz" Mar 13 10:39:28.644733 master-0 kubenswrapper[7271]: I0313 10:39:28.643832 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 13 10:39:28.651330 master-0 kubenswrapper[7271]: I0313 10:39:28.651267 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 13 10:39:28.651536 master-0 kubenswrapper[7271]: I0313 10:39:28.651340 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 13 10:39:28.651643 master-0 kubenswrapper[7271]: I0313 10:39:28.651577 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 13 10:39:28.665779 master-0 kubenswrapper[7271]: I0313 10:39:28.662060 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlmhn\" (UniqueName: \"kubernetes.io/projected/070b85a0-f076-4750-aa00-dabba401dc75-kube-api-access-nlmhn\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.665779 master-0 kubenswrapper[7271]: I0313 10:39:28.662130 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.665779 master-0 kubenswrapper[7271]: I0313 10:39:28.662202 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86774fd7-7c26-4b41-badb-de1004397637-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:39:28.665779 master-0 kubenswrapper[7271]: I0313 10:39:28.662292 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-config\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.665779 master-0 kubenswrapper[7271]: I0313 10:39:28.662349 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfxm5\" (UniqueName: \"kubernetes.io/projected/86774fd7-7c26-4b41-badb-de1004397637-kube-api-access-tfxm5\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:39:28.665779 master-0 kubenswrapper[7271]: I0313 10:39:28.662380 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-images\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.665779 master-0 kubenswrapper[7271]: I0313 
10:39:28.662420 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.668725 master-0 kubenswrapper[7271]: I0313 10:39:28.666802 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"] Mar 13 10:39:28.742969 master-0 kubenswrapper[7271]: I0313 10:39:28.742836 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd"] Mar 13 10:39:28.744362 master-0 kubenswrapper[7271]: I0313 10:39:28.744342 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:28.754618 master-0 kubenswrapper[7271]: I0313 10:39:28.752924 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4mksc" Mar 13 10:39:28.754618 master-0 kubenswrapper[7271]: I0313 10:39:28.753189 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 13 10:39:28.754618 master-0 kubenswrapper[7271]: I0313 10:39:28.753381 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 13 10:39:28.769888 master-0 kubenswrapper[7271]: I0313 10:39:28.768349 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd"] Mar 13 10:39:28.770604 master-0 kubenswrapper[7271]: I0313 10:39:28.770552 7271 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tfxm5\" (UniqueName: \"kubernetes.io/projected/86774fd7-7c26-4b41-badb-de1004397637-kube-api-access-tfxm5\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:39:28.770695 master-0 kubenswrapper[7271]: I0313 10:39:28.770609 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-images\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.770695 master-0 kubenswrapper[7271]: I0313 10:39:28.770652 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.770695 master-0 kubenswrapper[7271]: I0313 10:39:28.770687 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlmhn\" (UniqueName: \"kubernetes.io/projected/070b85a0-f076-4750-aa00-dabba401dc75-kube-api-access-nlmhn\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.770821 master-0 kubenswrapper[7271]: I0313 10:39:28.770716 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cert\") pod 
\"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.770940 master-0 kubenswrapper[7271]: I0313 10:39:28.770914 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-config\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.771001 master-0 kubenswrapper[7271]: I0313 10:39:28.770952 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86774fd7-7c26-4b41-badb-de1004397637-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:39:28.780619 master-0 kubenswrapper[7271]: I0313 10:39:28.774627 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-nhsd9"] Mar 13 10:39:28.798619 master-0 kubenswrapper[7271]: I0313 10:39:28.788852 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86774fd7-7c26-4b41-badb-de1004397637-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:39:28.798619 master-0 kubenswrapper[7271]: I0313 10:39:28.789438 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-config\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" 
(UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.798619 master-0 kubenswrapper[7271]: I0313 10:39:28.790542 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.798619 master-0 kubenswrapper[7271]: I0313 10:39:28.796144 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 13 10:39:28.798619 master-0 kubenswrapper[7271]: I0313 10:39:28.796305 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-t9jpj" Mar 13 10:39:28.798619 master-0 kubenswrapper[7271]: I0313 10:39:28.796433 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 13 10:39:28.798619 master-0 kubenswrapper[7271]: I0313 10:39:28.796580 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 10:39:28.798619 master-0 kubenswrapper[7271]: I0313 10:39:28.796775 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 13 10:39:28.802609 master-0 kubenswrapper[7271]: I0313 10:39:28.800068 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 13 10:39:28.802609 master-0 kubenswrapper[7271]: I0313 10:39:28.800815 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-images\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.815563 master-0 kubenswrapper[7271]: I0313 10:39:28.803099 7271 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-nhsd9"] Mar 13 10:39:28.840928 master-0 kubenswrapper[7271]: I0313 10:39:28.838883 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.842428 master-0 kubenswrapper[7271]: I0313 10:39:28.842382 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfxm5\" (UniqueName: \"kubernetes.io/projected/86774fd7-7c26-4b41-badb-de1004397637-kube-api-access-tfxm5\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:39:28.842428 master-0 kubenswrapper[7271]: I0313 10:39:28.842416 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlmhn\" (UniqueName: \"kubernetes.io/projected/070b85a0-f076-4750-aa00-dabba401dc75-kube-api-access-nlmhn\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.850816 master-0 kubenswrapper[7271]: I0313 10:39:28.845322 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.881362 master-0 kubenswrapper[7271]: I0313 10:39:28.879900 7271 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfzq\" (UniqueName: \"kubernetes.io/projected/c87545aa-11c2-4e6e-8c13-16eeff3be83b-kube-api-access-pwfzq\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.881362 master-0 kubenswrapper[7271]: I0313 10:39:28.879987 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.881362 master-0 kubenswrapper[7271]: I0313 10:39:28.880065 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c87545aa-11c2-4e6e-8c13-16eeff3be83b-snapshots\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.881362 master-0 kubenswrapper[7271]: I0313 10:39:28.880098 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt7hs\" (UniqueName: \"kubernetes.io/projected/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-kube-api-access-bt7hs\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:28.881362 master-0 kubenswrapper[7271]: I0313 10:39:28.880188 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-auth-proxy-config\") pod 
\"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:28.881362 master-0 kubenswrapper[7271]: I0313 10:39:28.880217 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-cert\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:28.881362 master-0 kubenswrapper[7271]: I0313 10:39:28.880233 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.881362 master-0 kubenswrapper[7271]: I0313 10:39:28.880274 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87545aa-11c2-4e6e-8c13-16eeff3be83b-serving-cert\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.924902 master-0 kubenswrapper[7271]: I0313 10:39:28.922119 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m"] Mar 13 10:39:28.924902 master-0 kubenswrapper[7271]: I0313 10:39:28.923630 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" Mar 13 10:39:28.945000 master-0 kubenswrapper[7271]: I0313 10:39:28.944314 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jzkqp" Mar 13 10:39:28.945000 master-0 kubenswrapper[7271]: I0313 10:39:28.944352 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 13 10:39:28.953136 master-0 kubenswrapper[7271]: I0313 10:39:28.952304 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m"] Mar 13 10:39:28.981885 master-0 kubenswrapper[7271]: I0313 10:39:28.981845 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.981960 master-0 kubenswrapper[7271]: I0313 10:39:28.981888 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-cert\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:28.981960 master-0 kubenswrapper[7271]: I0313 10:39:28.981914 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87545aa-11c2-4e6e-8c13-16eeff3be83b-serving-cert\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " 
pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.981960 master-0 kubenswrapper[7271]: I0313 10:39:28.981949 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfzq\" (UniqueName: \"kubernetes.io/projected/c87545aa-11c2-4e6e-8c13-16eeff3be83b-kube-api-access-pwfzq\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.982050 master-0 kubenswrapper[7271]: I0313 10:39:28.981970 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.982050 master-0 kubenswrapper[7271]: I0313 10:39:28.981995 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c87545aa-11c2-4e6e-8c13-16eeff3be83b-snapshots\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.982050 master-0 kubenswrapper[7271]: I0313 10:39:28.982012 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt7hs\" (UniqueName: \"kubernetes.io/projected/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-kube-api-access-bt7hs\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:28.982050 master-0 kubenswrapper[7271]: I0313 10:39:28.982045 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:28.982765 master-0 kubenswrapper[7271]: I0313 10:39:28.982736 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:28.983039 master-0 kubenswrapper[7271]: I0313 10:39:28.983008 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.983170 master-0 kubenswrapper[7271]: I0313 10:39:28.983145 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.983610 master-0 kubenswrapper[7271]: I0313 10:39:28.983561 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c87545aa-11c2-4e6e-8c13-16eeff3be83b-snapshots\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:28.985815 master-0 kubenswrapper[7271]: I0313 
10:39:28.985770 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:39:28.990260 master-0 kubenswrapper[7271]: I0313 10:39:28.989776 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-cert\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:29.008069 master-0 kubenswrapper[7271]: I0313 10:39:29.008029 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87545aa-11c2-4e6e-8c13-16eeff3be83b-serving-cert\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:29.010496 master-0 kubenswrapper[7271]: I0313 10:39:29.010462 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfzq\" (UniqueName: \"kubernetes.io/projected/c87545aa-11c2-4e6e-8c13-16eeff3be83b-kube-api-access-pwfzq\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:29.015194 master-0 kubenswrapper[7271]: I0313 10:39:29.011550 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt7hs\" (UniqueName: \"kubernetes.io/projected/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-kube-api-access-bt7hs\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:29.083165 master-0 kubenswrapper[7271]: I0313 10:39:29.083121 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:39:29.083762 master-0 kubenswrapper[7271]: I0313 10:39:29.083711 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp847\" (UniqueName: \"kubernetes.io/projected/9da11462-a91d-4d02-8614-78b4c5b2f7e2-kube-api-access-hp847\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" Mar 13 10:39:29.083830 master-0 kubenswrapper[7271]: I0313 10:39:29.083810 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/9da11462-a91d-4d02-8614-78b4c5b2f7e2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" Mar 13 10:39:29.107740 master-0 kubenswrapper[7271]: I0313 10:39:29.104817 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7"] Mar 13 10:39:29.107740 master-0 kubenswrapper[7271]: I0313 10:39:29.107039 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.113883 master-0 kubenswrapper[7271]: I0313 10:39:29.111927 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 10:39:29.113883 master-0 kubenswrapper[7271]: I0313 10:39:29.113230 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 10:39:29.113883 master-0 kubenswrapper[7271]: I0313 10:39:29.113348 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 10:39:29.113883 master-0 kubenswrapper[7271]: I0313 10:39:29.113423 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7mc4m" Mar 13 10:39:29.113883 master-0 kubenswrapper[7271]: I0313 10:39:29.113494 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 10:39:29.138671 master-0 kubenswrapper[7271]: I0313 10:39:29.132535 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 10:39:29.186913 master-0 kubenswrapper[7271]: I0313 10:39:29.183275 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:39:29.186913 master-0 kubenswrapper[7271]: I0313 10:39:29.186383 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/9da11462-a91d-4d02-8614-78b4c5b2f7e2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" Mar 13 10:39:29.186913 master-0 kubenswrapper[7271]: I0313 10:39:29.186525 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp847\" (UniqueName: \"kubernetes.io/projected/9da11462-a91d-4d02-8614-78b4c5b2f7e2-kube-api-access-hp847\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" Mar 13 10:39:29.196230 master-0 kubenswrapper[7271]: I0313 10:39:29.195604 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs"] Mar 13 10:39:29.198862 master-0 kubenswrapper[7271]: I0313 10:39:29.198190 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.203502 master-0 kubenswrapper[7271]: I0313 10:39:29.203454 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/9da11462-a91d-4d02-8614-78b4c5b2f7e2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" Mar 13 10:39:29.204507 master-0 kubenswrapper[7271]: I0313 10:39:29.204479 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 10:39:29.205807 master-0 kubenswrapper[7271]: I0313 10:39:29.204731 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 10:39:29.205807 master-0 kubenswrapper[7271]: I0313 10:39:29.204867 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 10:39:29.205807 master-0 kubenswrapper[7271]: I0313 10:39:29.205327 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 10:39:29.205807 master-0 kubenswrapper[7271]: I0313 10:39:29.205557 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 10:39:29.205807 master-0 kubenswrapper[7271]: I0313 10:39:29.205703 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-bk5cz" Mar 13 10:39:29.221690 master-0 kubenswrapper[7271]: I0313 10:39:29.220513 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hp847\" (UniqueName: \"kubernetes.io/projected/9da11462-a91d-4d02-8614-78b4c5b2f7e2-kube-api-access-hp847\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" Mar 13 10:39:29.226622 master-0 kubenswrapper[7271]: I0313 10:39:29.226551 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs"] Mar 13 10:39:29.227058 master-0 kubenswrapper[7271]: I0313 10:39:29.226956 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:39:29.259500 master-0 kubenswrapper[7271]: I0313 10:39:29.259184 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"] Mar 13 10:39:29.287784 master-0 kubenswrapper[7271]: I0313 10:39:29.287727 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.288014 master-0 kubenswrapper[7271]: I0313 10:39:29.287802 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.288014 master-0 kubenswrapper[7271]: I0313 10:39:29.287837 7271 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.288014 master-0 kubenswrapper[7271]: I0313 10:39:29.287865 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26dtr\" (UniqueName: \"kubernetes.io/projected/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-kube-api-access-26dtr\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.288014 master-0 kubenswrapper[7271]: I0313 10:39:29.287888 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.288014 master-0 kubenswrapper[7271]: I0313 10:39:29.287909 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpwh9\" (UniqueName: \"kubernetes.io/projected/e9f64995-fa1e-4205-981b-be7a7ae67115-kube-api-access-vpwh9\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.288014 master-0 kubenswrapper[7271]: I0313 10:39:29.287930 7271 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9f64995-fa1e-4205-981b-be7a7ae67115-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.288202 master-0 kubenswrapper[7271]: I0313 10:39:29.288032 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f64995-fa1e-4205-981b-be7a7ae67115-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.288259 master-0 kubenswrapper[7271]: I0313 10:39:29.288205 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-images\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.312607 master-0 kubenswrapper[7271]: I0313 10:39:29.312545 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" Mar 13 10:39:29.315424 master-0 kubenswrapper[7271]: I0313 10:39:29.315373 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"] Mar 13 10:39:29.390047 master-0 kubenswrapper[7271]: I0313 10:39:29.389813 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-images\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.390492 master-0 kubenswrapper[7271]: I0313 10:39:29.390396 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.391020 master-0 kubenswrapper[7271]: I0313 10:39:29.390927 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-images\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.391020 master-0 kubenswrapper[7271]: I0313 10:39:29.390940 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: 
\"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.391146 master-0 kubenswrapper[7271]: I0313 10:39:29.391059 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.391233 master-0 kubenswrapper[7271]: I0313 10:39:29.391205 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.391494 master-0 kubenswrapper[7271]: I0313 10:39:29.391460 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26dtr\" (UniqueName: \"kubernetes.io/projected/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-kube-api-access-26dtr\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.391565 master-0 kubenswrapper[7271]: I0313 10:39:29.391509 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.391565 master-0 kubenswrapper[7271]: I0313 10:39:29.391544 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpwh9\" (UniqueName: \"kubernetes.io/projected/e9f64995-fa1e-4205-981b-be7a7ae67115-kube-api-access-vpwh9\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.391696 master-0 kubenswrapper[7271]: I0313 10:39:29.391575 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9f64995-fa1e-4205-981b-be7a7ae67115-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.391696 master-0 kubenswrapper[7271]: I0313 10:39:29.391662 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9f64995-fa1e-4205-981b-be7a7ae67115-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.392009 master-0 kubenswrapper[7271]: I0313 10:39:29.391939 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f64995-fa1e-4205-981b-be7a7ae67115-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: 
\"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.392871 master-0 kubenswrapper[7271]: I0313 10:39:29.392823 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.398767 master-0 kubenswrapper[7271]: I0313 10:39:29.398402 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f64995-fa1e-4205-981b-be7a7ae67115-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.399981 master-0 kubenswrapper[7271]: I0313 10:39:29.399912 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.401163 master-0 kubenswrapper[7271]: I0313 10:39:29.401127 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " 
pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.422233 master-0 kubenswrapper[7271]: I0313 10:39:29.422179 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26dtr\" (UniqueName: \"kubernetes.io/projected/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-kube-api-access-26dtr\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.449663 master-0 kubenswrapper[7271]: I0313 10:39:29.442609 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpwh9\" (UniqueName: \"kubernetes.io/projected/e9f64995-fa1e-4205-981b-be7a7ae67115-kube-api-access-vpwh9\") pod \"cluster-cloud-controller-manager-operator-559568b945-r8ck7\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.459166 master-0 kubenswrapper[7271]: I0313 10:39:29.458458 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"] Mar 13 10:39:29.470350 master-0 kubenswrapper[7271]: I0313 10:39:29.467565 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.470350 master-0 kubenswrapper[7271]: I0313 10:39:29.469424 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"] Mar 13 10:39:29.470819 master-0 kubenswrapper[7271]: I0313 10:39:29.470464 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-chx8x" Mar 13 10:39:29.483378 master-0 kubenswrapper[7271]: I0313 10:39:29.475502 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 10:39:29.499056 master-0 kubenswrapper[7271]: I0313 10:39:29.495752 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:39:29.538317 master-0 kubenswrapper[7271]: I0313 10:39:29.536702 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:39:29.602483 master-0 kubenswrapper[7271]: I0313 10:39:29.602428 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-webhook-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.602746 master-0 kubenswrapper[7271]: I0313 10:39:29.602497 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1edde4bf-4554-4ab2-b588-513ad84a9bae-tmpfs\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.602746 master-0 kubenswrapper[7271]: I0313 10:39:29.602531 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxkl8\" (UniqueName: \"kubernetes.io/projected/1edde4bf-4554-4ab2-b588-513ad84a9bae-kube-api-access-kxkl8\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.602746 master-0 kubenswrapper[7271]: I0313 10:39:29.602605 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-apiservice-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.627695 master-0 kubenswrapper[7271]: W0313 10:39:29.623683 7271 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9f64995_fa1e_4205_981b_be7a7ae67115.slice/crio-f034de0da3c46b265fa5f09cbe0e11e48a199aeaabc3bd2f6dde6e2b9b1b9898 WatchSource:0}: Error finding container f034de0da3c46b265fa5f09cbe0e11e48a199aeaabc3bd2f6dde6e2b9b1b9898: Status 404 returned error can't find the container with id f034de0da3c46b265fa5f09cbe0e11e48a199aeaabc3bd2f6dde6e2b9b1b9898 Mar 13 10:39:29.677082 master-0 kubenswrapper[7271]: I0313 10:39:29.677030 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdzpd" event={"ID":"beee81ef-5a3a-4df2-85d5-2573679d261f","Type":"ContainerStarted","Data":"5981b0f268f1a64d5e07b672a70671406c05cd6d7d9cce3115bdfd6054d046d6"} Mar 13 10:39:29.694489 master-0 kubenswrapper[7271]: I0313 10:39:29.692874 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz"] Mar 13 10:39:29.694489 master-0 kubenswrapper[7271]: I0313 10:39:29.693993 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.695460 master-0 kubenswrapper[7271]: I0313 10:39:29.695355 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"] Mar 13 10:39:29.703778 master-0 kubenswrapper[7271]: I0313 10:39:29.703728 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 10:39:29.703959 master-0 kubenswrapper[7271]: I0313 10:39:29.703734 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 10:39:29.704161 master-0 kubenswrapper[7271]: I0313 10:39:29.704038 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-apiservice-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.704229 master-0 kubenswrapper[7271]: I0313 10:39:29.704197 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-webhook-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.704271 master-0 kubenswrapper[7271]: I0313 10:39:29.704246 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1edde4bf-4554-4ab2-b588-513ad84a9bae-tmpfs\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.704315 master-0 kubenswrapper[7271]: 
I0313 10:39:29.704284 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxkl8\" (UniqueName: \"kubernetes.io/projected/1edde4bf-4554-4ab2-b588-513ad84a9bae-kube-api-access-kxkl8\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.704401 master-0 kubenswrapper[7271]: I0313 10:39:29.704381 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-fwh6p" Mar 13 10:39:29.704641 master-0 kubenswrapper[7271]: I0313 10:39:29.704531 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 10:39:29.705948 master-0 kubenswrapper[7271]: I0313 10:39:29.705896 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1edde4bf-4554-4ab2-b588-513ad84a9bae-tmpfs\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.711481 master-0 kubenswrapper[7271]: I0313 10:39:29.710725 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-apiservice-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.712916 master-0 kubenswrapper[7271]: I0313 10:39:29.712750 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-webhook-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.714339 master-0 kubenswrapper[7271]: I0313 10:39:29.714028 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" event={"ID":"e9f64995-fa1e-4205-981b-be7a7ae67115","Type":"ContainerStarted","Data":"f034de0da3c46b265fa5f09cbe0e11e48a199aeaabc3bd2f6dde6e2b9b1b9898"} Mar 13 10:39:29.715267 master-0 kubenswrapper[7271]: I0313 10:39:29.715230 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz"] Mar 13 10:39:29.727613 master-0 kubenswrapper[7271]: I0313 10:39:29.726527 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgvrc" event={"ID":"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16","Type":"ContainerStarted","Data":"a89f2be4905476a4b6dbbd07f3ca4359a228444679e496e247030ce754dfdd31"} Mar 13 10:39:29.735248 master-0 kubenswrapper[7271]: I0313 10:39:29.735213 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxkl8\" (UniqueName: \"kubernetes.io/projected/1edde4bf-4554-4ab2-b588-513ad84a9bae-kube-api-access-kxkl8\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.742306 master-0 kubenswrapper[7271]: I0313 10:39:29.740529 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" event={"ID":"4e6ecc16-19cb-4b66-801f-b958b10d0ce7","Type":"ContainerStarted","Data":"1ee97873740b9b10b1888585dd4cf251d4592642ab8be20585d1c34abd206ca4"} Mar 13 10:39:29.744199 master-0 kubenswrapper[7271]: I0313 10:39:29.743475 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" event={"ID":"253196c1-ea2c-4382-b0fc-c56a8d919b9a","Type":"ContainerStarted","Data":"f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b"} Mar 13 10:39:29.744199 master-0 kubenswrapper[7271]: I0313 10:39:29.743542 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" event={"ID":"253196c1-ea2c-4382-b0fc-c56a8d919b9a","Type":"ContainerStarted","Data":"99b87d3645356bd9adf99abbbc876f3689c165134b9de6ed97fe71d5071ab38a"} Mar 13 10:39:29.751439 master-0 kubenswrapper[7271]: I0313 10:39:29.751387 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft" event={"ID":"484e6d0b-d057-4658-8e49-bbe7e6f6ee86","Type":"ContainerStarted","Data":"fcd78f90ad99c247dece0b85a206d1ac457560cacc8ddad5d00adc32257026d1"} Mar 13 10:39:29.767104 master-0 kubenswrapper[7271]: W0313 10:39:29.766849 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070b85a0_f076_4750_aa00_dabba401dc75.slice/crio-3a7d2e60ddc43ee697baaf390993508cae16887b2a5d4cb1ef47d6c884025855 WatchSource:0}: Error finding container 3a7d2e60ddc43ee697baaf390993508cae16887b2a5d4cb1ef47d6c884025855: Status 404 returned error can't find the container with id 3a7d2e60ddc43ee697baaf390993508cae16887b2a5d4cb1ef47d6c884025855 Mar 13 10:39:29.769994 master-0 kubenswrapper[7271]: I0313 10:39:29.769919 7271 generic.go:334] "Generic (PLEG): container finished" podID="5aa507cf-017d-44f5-8662-77547f82fb51" containerID="ac19f75968e7d0eae52d08a547ded61c84c9448d5897a33d898474c90867405f" exitCode=0 Mar 13 10:39:29.770057 master-0 kubenswrapper[7271]: I0313 10:39:29.770037 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr4ts" 
event={"ID":"5aa507cf-017d-44f5-8662-77547f82fb51","Type":"ContainerDied","Data":"ac19f75968e7d0eae52d08a547ded61c84c9448d5897a33d898474c90867405f"} Mar 13 10:39:29.786476 master-0 kubenswrapper[7271]: I0313 10:39:29.786429 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:29.812918 master-0 kubenswrapper[7271]: I0313 10:39:29.809426 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb5l4\" (UniqueName: \"kubernetes.io/projected/48f99840-4d9e-49c5-819e-0bb15493feb5-kube-api-access-mb5l4\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.812918 master-0 kubenswrapper[7271]: I0313 10:39:29.809497 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-config\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.812918 master-0 kubenswrapper[7271]: I0313 10:39:29.809616 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-images\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.812918 master-0 kubenswrapper[7271]: I0313 10:39:29.809660 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/48f99840-4d9e-49c5-819e-0bb15493feb5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.817445 master-0 kubenswrapper[7271]: I0313 10:39:29.817368 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd"] Mar 13 10:39:29.829790 master-0 kubenswrapper[7271]: I0313 10:39:29.829726 7271 generic.go:334] "Generic (PLEG): container finished" podID="2a05e72d-836f-40e0-8a5c-ee02dce494b3" containerID="490778339a50279f0baab46d399265e6afeef4d74597e2f61bb4cc2c5373d122" exitCode=0 Mar 13 10:39:29.829868 master-0 kubenswrapper[7271]: I0313 10:39:29.829800 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrztj" event={"ID":"2a05e72d-836f-40e0-8a5c-ee02dce494b3","Type":"ContainerDied","Data":"490778339a50279f0baab46d399265e6afeef4d74597e2f61bb4cc2c5373d122"} Mar 13 10:39:29.860409 master-0 kubenswrapper[7271]: I0313 10:39:29.860312 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm"] Mar 13 10:39:29.862110 master-0 kubenswrapper[7271]: W0313 10:39:29.861189 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9c4a7b4_28f2_4dcb_bdba_e23a67b79c33.slice/crio-52b02d0e4a7c2c479465f8242f01da199717910ea7b898fbcda40528a83b169e WatchSource:0}: Error finding container 52b02d0e4a7c2c479465f8242f01da199717910ea7b898fbcda40528a83b169e: Status 404 returned error can't find the container with id 52b02d0e4a7c2c479465f8242f01da199717910ea7b898fbcda40528a83b169e Mar 13 10:39:29.911759 master-0 kubenswrapper[7271]: I0313 10:39:29.911530 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mb5l4\" (UniqueName: \"kubernetes.io/projected/48f99840-4d9e-49c5-819e-0bb15493feb5-kube-api-access-mb5l4\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.911759 master-0 kubenswrapper[7271]: I0313 10:39:29.911639 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-config\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.912086 master-0 kubenswrapper[7271]: I0313 10:39:29.911922 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-images\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.912086 master-0 kubenswrapper[7271]: I0313 10:39:29.912043 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48f99840-4d9e-49c5-819e-0bb15493feb5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.913224 master-0 kubenswrapper[7271]: I0313 10:39:29.912801 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-config\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.914363 
master-0 kubenswrapper[7271]: I0313 10:39:29.914303 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-images\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.919694 master-0 kubenswrapper[7271]: I0313 10:39:29.919618 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48f99840-4d9e-49c5-819e-0bb15493feb5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:29.932725 master-0 kubenswrapper[7271]: I0313 10:39:29.932647 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb5l4\" (UniqueName: \"kubernetes.io/projected/48f99840-4d9e-49c5-819e-0bb15493feb5-kube-api-access-mb5l4\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:30.111635 master-0 kubenswrapper[7271]: I0313 10:39:30.111189 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m"] Mar 13 10:39:30.154198 master-0 kubenswrapper[7271]: I0313 10:39:30.154141 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs"] Mar 13 10:39:30.157869 master-0 kubenswrapper[7271]: I0313 10:39:30.157798 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-nhsd9"] Mar 13 10:39:30.179300 master-0 kubenswrapper[7271]: I0313 10:39:30.172649 7271 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:39:30.188765 master-0 kubenswrapper[7271]: W0313 10:39:30.188722 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da11462_a91d_4d02_8614_78b4c5b2f7e2.slice/crio-e1e2b9079d6118c595611f9ff8bcc0950650ff2a128d6d9e608f418bd87daef1 WatchSource:0}: Error finding container e1e2b9079d6118c595611f9ff8bcc0950650ff2a128d6d9e608f418bd87daef1: Status 404 returned error can't find the container with id e1e2b9079d6118c595611f9ff8bcc0950650ff2a128d6d9e608f418bd87daef1 Mar 13 10:39:30.188992 master-0 kubenswrapper[7271]: W0313 10:39:30.188935 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc87545aa_11c2_4e6e_8c13_16eeff3be83b.slice/crio-3ae050b82d12b25aa641b68f4f3b48f026796f7cf0455a63a6e8d7c183a407db WatchSource:0}: Error finding container 3ae050b82d12b25aa641b68f4f3b48f026796f7cf0455a63a6e8d7c183a407db: Status 404 returned error can't find the container with id 3ae050b82d12b25aa641b68f4f3b48f026796f7cf0455a63a6e8d7c183a407db Mar 13 10:39:30.461550 master-0 kubenswrapper[7271]: I0313 10:39:30.461479 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"] Mar 13 10:39:30.593227 master-0 kubenswrapper[7271]: I0313 10:39:30.593164 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz"] Mar 13 10:39:30.602752 master-0 kubenswrapper[7271]: W0313 10:39:30.602478 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48f99840_4d9e_49c5_819e_0bb15493feb5.slice/crio-11fee05d2806af61a462a41f2f8d14b1a8fc382251199b04114ff9afb908d5a1 WatchSource:0}: Error finding container 
11fee05d2806af61a462a41f2f8d14b1a8fc382251199b04114ff9afb908d5a1: Status 404 returned error can't find the container with id 11fee05d2806af61a462a41f2f8d14b1a8fc382251199b04114ff9afb908d5a1 Mar 13 10:39:30.816898 master-0 kubenswrapper[7271]: I0313 10:39:30.816817 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:39:30.852627 master-0 kubenswrapper[7271]: I0313 10:39:30.852513 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" event={"ID":"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33","Type":"ContainerStarted","Data":"1be5a87e684f26f690b597a33ddff1c2cf6eee03aefd648e7c215946dfa8bbdc"} Mar 13 10:39:30.852843 master-0 kubenswrapper[7271]: I0313 10:39:30.852648 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" event={"ID":"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33","Type":"ContainerStarted","Data":"52b02d0e4a7c2c479465f8242f01da199717910ea7b898fbcda40528a83b169e"} Mar 13 10:39:30.858035 master-0 kubenswrapper[7271]: I0313 10:39:30.857980 7271 generic.go:334] "Generic (PLEG): container finished" podID="beee81ef-5a3a-4df2-85d5-2573679d261f" containerID="5981b0f268f1a64d5e07b672a70671406c05cd6d7d9cce3115bdfd6054d046d6" exitCode=0 Mar 13 10:39:30.858220 master-0 kubenswrapper[7271]: I0313 10:39:30.858062 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdzpd" event={"ID":"beee81ef-5a3a-4df2-85d5-2573679d261f","Type":"ContainerDied","Data":"5981b0f268f1a64d5e07b672a70671406c05cd6d7d9cce3115bdfd6054d046d6"} Mar 13 10:39:30.867812 master-0 kubenswrapper[7271]: I0313 10:39:30.867641 7271 generic.go:334] "Generic (PLEG): container finished" podID="4a1b43c4-55b9-4c72-ba7c-9089bf28cf16" containerID="a89f2be4905476a4b6dbbd07f3ca4359a228444679e496e247030ce754dfdd31" exitCode=0 
Mar 13 10:39:30.867980 master-0 kubenswrapper[7271]: I0313 10:39:30.867709 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgvrc" event={"ID":"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16","Type":"ContainerDied","Data":"a89f2be4905476a4b6dbbd07f3ca4359a228444679e496e247030ce754dfdd31"} Mar 13 10:39:30.878837 master-0 kubenswrapper[7271]: I0313 10:39:30.878738 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" event={"ID":"9da11462-a91d-4d02-8614-78b4c5b2f7e2","Type":"ContainerStarted","Data":"e1e2b9079d6118c595611f9ff8bcc0950650ff2a128d6d9e608f418bd87daef1"} Mar 13 10:39:30.886497 master-0 kubenswrapper[7271]: I0313 10:39:30.886422 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" event={"ID":"48f99840-4d9e-49c5-819e-0bb15493feb5","Type":"ContainerStarted","Data":"4ad4426d4c03d33abea8e37a87af888183f674f424ac51a44bbf0c866150aada"} Mar 13 10:39:30.886497 master-0 kubenswrapper[7271]: I0313 10:39:30.886496 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" event={"ID":"48f99840-4d9e-49c5-819e-0bb15493feb5","Type":"ContainerStarted","Data":"11fee05d2806af61a462a41f2f8d14b1a8fc382251199b04114ff9afb908d5a1"} Mar 13 10:39:30.889925 master-0 kubenswrapper[7271]: I0313 10:39:30.889885 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" event={"ID":"d0f42a72-24c7-49e6-8edb-97b2b0d6183a","Type":"ContainerStarted","Data":"388f8c59b471559999e546dfbc9763f7a6ee3ac7c4ee5cedac0a845be08290cd"} Mar 13 10:39:30.889925 master-0 kubenswrapper[7271]: I0313 10:39:30.889927 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" 
event={"ID":"d0f42a72-24c7-49e6-8edb-97b2b0d6183a","Type":"ContainerStarted","Data":"d13596a56d4b7303ec265a6d08c85fbe9795571675ab43829e0e95ae8ae9fbbf"} Mar 13 10:39:30.890152 master-0 kubenswrapper[7271]: I0313 10:39:30.889940 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" event={"ID":"d0f42a72-24c7-49e6-8edb-97b2b0d6183a","Type":"ContainerStarted","Data":"09802d7d0a05bccad87d5ddf8cff0a47cdae0568f0f82013285bb0d1dc8f5424"} Mar 13 10:39:30.902877 master-0 kubenswrapper[7271]: I0313 10:39:30.902731 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" event={"ID":"070b85a0-f076-4750-aa00-dabba401dc75","Type":"ContainerStarted","Data":"3a7d2e60ddc43ee697baaf390993508cae16887b2a5d4cb1ef47d6c884025855"} Mar 13 10:39:30.910398 master-0 kubenswrapper[7271]: I0313 10:39:30.910321 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" podStartSLOduration=1.9103047819999999 podStartE2EDuration="1.910304782s" podCreationTimestamp="2026-03-13 10:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:39:30.906024389 +0000 UTC m=+225.432846789" watchObservedRunningTime="2026-03-13 10:39:30.910304782 +0000 UTC m=+225.437127172" Mar 13 10:39:30.912691 master-0 kubenswrapper[7271]: I0313 10:39:30.912639 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" event={"ID":"4e6ecc16-19cb-4b66-801f-b958b10d0ce7","Type":"ContainerStarted","Data":"7d45be0a66921912bf922dc0da9fc69dbc2f555b4d277f214d621b8b7b95aefb"} Mar 13 10:39:30.923808 master-0 kubenswrapper[7271]: I0313 10:39:30.923758 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" event={"ID":"86774fd7-7c26-4b41-badb-de1004397637","Type":"ContainerStarted","Data":"b307d791edfe64a9b684cb84f780359a81088e7f734461ffa9d77ba51707349a"} Mar 13 10:39:30.945784 master-0 kubenswrapper[7271]: I0313 10:39:30.945707 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" event={"ID":"c87545aa-11c2-4e6e-8c13-16eeff3be83b","Type":"ContainerStarted","Data":"3ae050b82d12b25aa641b68f4f3b48f026796f7cf0455a63a6e8d7c183a407db"} Mar 13 10:39:30.950481 master-0 kubenswrapper[7271]: I0313 10:39:30.950423 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" event={"ID":"1edde4bf-4554-4ab2-b588-513ad84a9bae","Type":"ContainerStarted","Data":"aafacbacf378665ba4f0de6c7d10c77484060cd0e36f3c783961efbdae6cc348"} Mar 13 10:39:30.950624 master-0 kubenswrapper[7271]: I0313 10:39:30.950508 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" event={"ID":"1edde4bf-4554-4ab2-b588-513ad84a9bae","Type":"ContainerStarted","Data":"a283bd1cab37da2c35528d1fc1a0a03b24555657ec54c53a6d0fcce5a530df6a"} Mar 13 10:39:30.953270 master-0 kubenswrapper[7271]: I0313 10:39:30.953198 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:30.974250 master-0 kubenswrapper[7271]: I0313 10:39:30.973439 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" podStartSLOduration=1.973415502 podStartE2EDuration="1.973415502s" podCreationTimestamp="2026-03-13 10:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 
10:39:30.969069497 +0000 UTC m=+225.495891887" watchObservedRunningTime="2026-03-13 10:39:30.973415502 +0000 UTC m=+225.500237892" Mar 13 10:39:31.524598 master-0 kubenswrapper[7271]: I0313 10:39:31.524504 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:39:33.255248 master-0 kubenswrapper[7271]: I0313 10:39:33.255195 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gdfnq"] Mar 13 10:39:33.256814 master-0 kubenswrapper[7271]: I0313 10:39:33.256731 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.258946 master-0 kubenswrapper[7271]: I0313 10:39:33.258904 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6rglb" Mar 13 10:39:33.260256 master-0 kubenswrapper[7271]: I0313 10:39:33.260229 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 10:39:33.277190 master-0 kubenswrapper[7271]: I0313 10:39:33.277126 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9zz\" (UniqueName: \"kubernetes.io/projected/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-kube-api-access-xg9zz\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.277433 master-0 kubenswrapper[7271]: I0313 10:39:33.277214 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-proxy-tls\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " 
pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.277433 master-0 kubenswrapper[7271]: I0313 10:39:33.277241 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-rootfs\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.277433 master-0 kubenswrapper[7271]: I0313 10:39:33.277288 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-mcd-auth-proxy-config\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.380596 master-0 kubenswrapper[7271]: I0313 10:39:33.380491 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9zz\" (UniqueName: \"kubernetes.io/projected/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-kube-api-access-xg9zz\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.380596 master-0 kubenswrapper[7271]: I0313 10:39:33.380609 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-proxy-tls\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.380951 master-0 kubenswrapper[7271]: I0313 10:39:33.380868 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-rootfs\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.381185 master-0 kubenswrapper[7271]: I0313 10:39:33.380676 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-rootfs\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.381330 master-0 kubenswrapper[7271]: I0313 10:39:33.381291 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-mcd-auth-proxy-config\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.382777 master-0 kubenswrapper[7271]: I0313 10:39:33.382750 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-mcd-auth-proxy-config\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.393681 master-0 kubenswrapper[7271]: I0313 10:39:33.393615 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-proxy-tls\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.397819 master-0 kubenswrapper[7271]: I0313 10:39:33.397781 7271 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9zz\" (UniqueName: \"kubernetes.io/projected/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-kube-api-access-xg9zz\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:33.573974 master-0 kubenswrapper[7271]: I0313 10:39:33.573911 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:39:57.720693 master-0 kubenswrapper[7271]: W0313 10:39:57.720635 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60e17cd1_c520_4d8d_8c72_47bf73b8cc66.slice/crio-824d2e06fb02d75eff387f4090fa04e983a89eabed59a10155690e2b0750ea37 WatchSource:0}: Error finding container 824d2e06fb02d75eff387f4090fa04e983a89eabed59a10155690e2b0750ea37: Status 404 returned error can't find the container with id 824d2e06fb02d75eff387f4090fa04e983a89eabed59a10155690e2b0750ea37 Mar 13 10:39:58.364817 master-0 kubenswrapper[7271]: I0313 10:39:58.361817 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdzpd" event={"ID":"beee81ef-5a3a-4df2-85d5-2573679d261f","Type":"ContainerStarted","Data":"d8056540594381501d1c85e917a63fa0ebf9fa72658b872d099f0814d972bba3"} Mar 13 10:39:58.372965 master-0 kubenswrapper[7271]: I0313 10:39:58.367305 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" event={"ID":"9da11462-a91d-4d02-8614-78b4c5b2f7e2","Type":"ContainerStarted","Data":"00da2a7b5527973fbd194100f44590333c80d5dcf0e49c8db3fcca2c086cc934"} Mar 13 10:39:58.372965 master-0 kubenswrapper[7271]: I0313 10:39:58.370732 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" event={"ID":"070b85a0-f076-4750-aa00-dabba401dc75","Type":"ContainerStarted","Data":"594cd9998ea936cf92d6d0f81aec77530767beeb227080ba41181e70dc234520"} Mar 13 10:39:58.417649 master-0 kubenswrapper[7271]: I0313 10:39:58.414372 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jdzpd" podStartSLOduration=5.560520752 podStartE2EDuration="1m0.414347922s" podCreationTimestamp="2026-03-13 10:38:58 +0000 UTC" firstStartedPulling="2026-03-13 10:39:02.957163561 +0000 UTC m=+197.483985951" lastFinishedPulling="2026-03-13 10:39:57.810990731 +0000 UTC m=+252.337813121" observedRunningTime="2026-03-13 10:39:58.409765871 +0000 UTC m=+252.936588281" watchObservedRunningTime="2026-03-13 10:39:58.414347922 +0000 UTC m=+252.941170312" Mar 13 10:39:58.419050 master-0 kubenswrapper[7271]: I0313 10:39:58.419009 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" event={"ID":"86774fd7-7c26-4b41-badb-de1004397637","Type":"ContainerStarted","Data":"321db24f6b88097e81d1f74a773688f6847398cb17d43e1232e0cf0f8762ba18"} Mar 13 10:39:58.437659 master-0 kubenswrapper[7271]: I0313 10:39:58.437087 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" podStartSLOduration=2.868217449 podStartE2EDuration="30.437064193s" podCreationTimestamp="2026-03-13 10:39:28 +0000 UTC" firstStartedPulling="2026-03-13 10:39:30.192020731 +0000 UTC m=+224.718843121" lastFinishedPulling="2026-03-13 10:39:57.760867475 +0000 UTC m=+252.287689865" observedRunningTime="2026-03-13 10:39:58.434326501 +0000 UTC m=+252.961148891" watchObservedRunningTime="2026-03-13 10:39:58.437064193 +0000 UTC m=+252.963886583" Mar 13 10:39:58.465807 master-0 kubenswrapper[7271]: I0313 10:39:58.465413 7271 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bgvrc" event={"ID":"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16","Type":"ContainerStarted","Data":"97714db43e30caa82ba0cd46b9cd09ae5a1f07ea7548f3706a321cf68a32a334"} Mar 13 10:39:58.474904 master-0 kubenswrapper[7271]: I0313 10:39:58.474137 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vr4ts" podStartSLOduration=8.596614695 podStartE2EDuration="1m3.474118854s" podCreationTimestamp="2026-03-13 10:38:55 +0000 UTC" firstStartedPulling="2026-03-13 10:39:02.95119844 +0000 UTC m=+197.478020830" lastFinishedPulling="2026-03-13 10:39:57.828702599 +0000 UTC m=+252.355524989" observedRunningTime="2026-03-13 10:39:58.471461843 +0000 UTC m=+252.998284233" watchObservedRunningTime="2026-03-13 10:39:58.474118854 +0000 UTC m=+253.000941244" Mar 13 10:39:58.492320 master-0 kubenswrapper[7271]: I0313 10:39:58.492245 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" event={"ID":"60e17cd1-c520-4d8d-8c72-47bf73b8cc66","Type":"ContainerStarted","Data":"358e8f9d12616d14638ef78a81f2cc8687a3bdc3ea509d3426c2865390df493b"} Mar 13 10:39:58.492320 master-0 kubenswrapper[7271]: I0313 10:39:58.492297 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" event={"ID":"60e17cd1-c520-4d8d-8c72-47bf73b8cc66","Type":"ContainerStarted","Data":"824d2e06fb02d75eff387f4090fa04e983a89eabed59a10155690e2b0750ea37"} Mar 13 10:39:58.503397 master-0 kubenswrapper[7271]: I0313 10:39:58.503321 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bgvrc" podStartSLOduration=8.705815987 podStartE2EDuration="1m3.503297055s" podCreationTimestamp="2026-03-13 10:38:55 +0000 UTC" firstStartedPulling="2026-03-13 10:39:02.960118471 +0000 UTC 
m=+197.486940861" lastFinishedPulling="2026-03-13 10:39:57.757599539 +0000 UTC m=+252.284421929" observedRunningTime="2026-03-13 10:39:58.501164259 +0000 UTC m=+253.027986649" watchObservedRunningTime="2026-03-13 10:39:58.503297055 +0000 UTC m=+253.030119445" Mar 13 10:39:58.519998 master-0 kubenswrapper[7271]: I0313 10:39:58.519936 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" event={"ID":"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33","Type":"ContainerStarted","Data":"33f485f0f2a1052d43c6456fe1c55f48c0eae8c08bc7615626d7dbf11fd3c26a"} Mar 13 10:39:58.539465 master-0 kubenswrapper[7271]: I0313 10:39:58.537771 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" podStartSLOduration=25.537745237 podStartE2EDuration="25.537745237s" podCreationTimestamp="2026-03-13 10:39:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:39:58.536777551 +0000 UTC m=+253.063599961" watchObservedRunningTime="2026-03-13 10:39:58.537745237 +0000 UTC m=+253.064567637" Mar 13 10:39:58.570248 master-0 kubenswrapper[7271]: I0313 10:39:58.570193 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" event={"ID":"48f99840-4d9e-49c5-819e-0bb15493feb5","Type":"ContainerStarted","Data":"3db54e90276a64402967c0bc59c00901e01327339bb78dd658883ac9c02f925f"} Mar 13 10:39:58.581615 master-0 kubenswrapper[7271]: I0313 10:39:58.579939 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" event={"ID":"253196c1-ea2c-4382-b0fc-c56a8d919b9a","Type":"ContainerStarted","Data":"f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511"} Mar 13 10:39:58.590226 master-0 kubenswrapper[7271]: I0313 10:39:58.589556 
7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft" event={"ID":"484e6d0b-d057-4658-8e49-bbe7e6f6ee86","Type":"ContainerStarted","Data":"06f340bfe3defa99f6d96411a1e67581d7833b82a603be2ce7a6f91338e36131"} Mar 13 10:39:58.591480 master-0 kubenswrapper[7271]: I0313 10:39:58.591445 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" event={"ID":"c87545aa-11c2-4e6e-8c13-16eeff3be83b","Type":"ContainerStarted","Data":"a54ca7738955f7ec185b4cde3784d0158686a36edc078876172035717347c129"} Mar 13 10:39:58.612991 master-0 kubenswrapper[7271]: I0313 10:39:58.609764 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" podStartSLOduration=3.163223382 podStartE2EDuration="30.609741041s" podCreationTimestamp="2026-03-13 10:39:28 +0000 UTC" firstStartedPulling="2026-03-13 10:39:30.24795577 +0000 UTC m=+224.774778160" lastFinishedPulling="2026-03-13 10:39:57.694473429 +0000 UTC m=+252.221295819" observedRunningTime="2026-03-13 10:39:58.570314508 +0000 UTC m=+253.097136898" watchObservedRunningTime="2026-03-13 10:39:58.609741041 +0000 UTC m=+253.136563431" Mar 13 10:39:58.612991 master-0 kubenswrapper[7271]: I0313 10:39:58.610244 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" podStartSLOduration=2.594715439 podStartE2EDuration="29.610238705s" podCreationTimestamp="2026-03-13 10:39:29 +0000 UTC" firstStartedPulling="2026-03-13 10:39:30.743318355 +0000 UTC m=+225.270140745" lastFinishedPulling="2026-03-13 10:39:57.758841621 +0000 UTC m=+252.285664011" observedRunningTime="2026-03-13 10:39:58.608003465 +0000 UTC m=+253.134825855" watchObservedRunningTime="2026-03-13 10:39:58.610238705 +0000 UTC m=+253.137061095" Mar 13 10:39:58.650667 master-0 kubenswrapper[7271]: 
I0313 10:39:58.644717 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft" podStartSLOduration=14.318279181 podStartE2EDuration="42.644695806s" podCreationTimestamp="2026-03-13 10:39:16 +0000 UTC" firstStartedPulling="2026-03-13 10:39:29.387985371 +0000 UTC m=+223.914807761" lastFinishedPulling="2026-03-13 10:39:57.714401996 +0000 UTC m=+252.241224386" observedRunningTime="2026-03-13 10:39:58.641907682 +0000 UTC m=+253.168730072" watchObservedRunningTime="2026-03-13 10:39:58.644695806 +0000 UTC m=+253.171518196" Mar 13 10:39:58.752693 master-0 kubenswrapper[7271]: I0313 10:39:58.748138 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" podStartSLOduration=10.569907948 podStartE2EDuration="38.748106512s" podCreationTimestamp="2026-03-13 10:39:20 +0000 UTC" firstStartedPulling="2026-03-13 10:39:29.536441818 +0000 UTC m=+224.063264208" lastFinishedPulling="2026-03-13 10:39:57.714640382 +0000 UTC m=+252.241462772" observedRunningTime="2026-03-13 10:39:58.679333992 +0000 UTC m=+253.206156402" watchObservedRunningTime="2026-03-13 10:39:58.748106512 +0000 UTC m=+253.274928892" Mar 13 10:39:58.752693 master-0 kubenswrapper[7271]: I0313 10:39:58.749447 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" podStartSLOduration=3.184525626 podStartE2EDuration="30.749439557s" podCreationTimestamp="2026-03-13 10:39:28 +0000 UTC" firstStartedPulling="2026-03-13 10:39:30.193933391 +0000 UTC m=+224.720755781" lastFinishedPulling="2026-03-13 10:39:57.758847322 +0000 UTC m=+252.285669712" observedRunningTime="2026-03-13 10:39:58.738039775 +0000 UTC m=+253.264862165" watchObservedRunningTime="2026-03-13 10:39:58.749439557 +0000 UTC m=+253.276261947" Mar 13 10:39:59.599793 master-0 kubenswrapper[7271]: I0313 
10:39:59.599485 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" event={"ID":"60e17cd1-c520-4d8d-8c72-47bf73b8cc66","Type":"ContainerStarted","Data":"0cc8f6d8bb88b316cc22791ec0cd8743485bf795ad55a8e6df497cb6f50d524c"} Mar 13 10:39:59.602083 master-0 kubenswrapper[7271]: I0313 10:39:59.601862 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" event={"ID":"070b85a0-f076-4750-aa00-dabba401dc75","Type":"ContainerStarted","Data":"1a494b3ca3cb1761fd57fb1b615d10b57bd5d1553f92457c49c194b1355c49de"} Mar 13 10:39:59.604368 master-0 kubenswrapper[7271]: I0313 10:39:59.604160 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" event={"ID":"4e6ecc16-19cb-4b66-801f-b958b10d0ce7","Type":"ContainerStarted","Data":"11cd86e66b36bc132e0e7aaa43855f08b41f8200a5328efc0bb0a3e898e5ef11"} Mar 13 10:39:59.606007 master-0 kubenswrapper[7271]: I0313 10:39:59.605941 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" event={"ID":"86774fd7-7c26-4b41-badb-de1004397637","Type":"ContainerStarted","Data":"8ba91ada12cb3ff40f925459c24bccedf3e0b69c48222b411f60e52bb11243d2"} Mar 13 10:39:59.609080 master-0 kubenswrapper[7271]: I0313 10:39:59.609044 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr4ts" event={"ID":"5aa507cf-017d-44f5-8662-77547f82fb51","Type":"ContainerStarted","Data":"7fcf3dd22107ed0311c136f2ec9aee51ad7e8c0c8637c647825544a19742fa66"} Mar 13 10:39:59.617015 master-0 kubenswrapper[7271]: I0313 10:39:59.616748 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrztj" 
event={"ID":"2a05e72d-836f-40e0-8a5c-ee02dce494b3","Type":"ContainerStarted","Data":"f0a1ec35e242ee7d27745728a76168b55ed589005642964b1322a83b31d055b6"} Mar 13 10:39:59.619782 master-0 kubenswrapper[7271]: I0313 10:39:59.619744 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" event={"ID":"e9f64995-fa1e-4205-981b-be7a7ae67115","Type":"ContainerStarted","Data":"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43"} Mar 13 10:39:59.619953 master-0 kubenswrapper[7271]: I0313 10:39:59.619933 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" event={"ID":"e9f64995-fa1e-4205-981b-be7a7ae67115","Type":"ContainerStarted","Data":"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7"} Mar 13 10:39:59.620046 master-0 kubenswrapper[7271]: I0313 10:39:59.620029 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" event={"ID":"e9f64995-fa1e-4205-981b-be7a7ae67115","Type":"ContainerStarted","Data":"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32"} Mar 13 10:39:59.627526 master-0 kubenswrapper[7271]: I0313 10:39:59.627445 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" podStartSLOduration=4.767041867 podStartE2EDuration="32.627420333s" podCreationTimestamp="2026-03-13 10:39:27 +0000 UTC" firstStartedPulling="2026-03-13 10:39:29.834118723 +0000 UTC m=+224.360941103" lastFinishedPulling="2026-03-13 10:39:57.694497179 +0000 UTC m=+252.221319569" observedRunningTime="2026-03-13 10:39:59.623868149 +0000 UTC m=+254.150690539" watchObservedRunningTime="2026-03-13 10:39:59.627420333 +0000 UTC m=+254.154242723" Mar 13 
10:39:59.708919 master-0 kubenswrapper[7271]: I0313 10:39:59.708814 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" podStartSLOduration=5.651532072 podStartE2EDuration="33.708788806s" podCreationTimestamp="2026-03-13 10:39:26 +0000 UTC" firstStartedPulling="2026-03-13 10:39:29.739161931 +0000 UTC m=+224.265984321" lastFinishedPulling="2026-03-13 10:39:57.796418665 +0000 UTC m=+252.323241055" observedRunningTime="2026-03-13 10:39:59.656177454 +0000 UTC m=+254.182999844" watchObservedRunningTime="2026-03-13 10:39:59.708788806 +0000 UTC m=+254.235611196" Mar 13 10:39:59.709156 master-0 kubenswrapper[7271]: I0313 10:39:59.708967 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" podStartSLOduration=2.577929094 podStartE2EDuration="30.708961401s" podCreationTimestamp="2026-03-13 10:39:29 +0000 UTC" firstStartedPulling="2026-03-13 10:39:29.628200375 +0000 UTC m=+224.155022765" lastFinishedPulling="2026-03-13 10:39:57.759232682 +0000 UTC m=+252.286055072" observedRunningTime="2026-03-13 10:39:59.691814837 +0000 UTC m=+254.218637227" watchObservedRunningTime="2026-03-13 10:39:59.708961401 +0000 UTC m=+254.235783791" Mar 13 10:39:59.725610 master-0 kubenswrapper[7271]: I0313 10:39:59.721204 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mrztj" podStartSLOduration=7.865504127 podStartE2EDuration="1m2.721181704s" podCreationTimestamp="2026-03-13 10:38:57 +0000 UTC" firstStartedPulling="2026-03-13 10:39:02.955258283 +0000 UTC m=+197.482080663" lastFinishedPulling="2026-03-13 10:39:57.81093585 +0000 UTC m=+252.337758240" observedRunningTime="2026-03-13 10:39:59.720150126 +0000 UTC m=+254.246972526" watchObservedRunningTime="2026-03-13 10:39:59.721181704 +0000 UTC 
m=+254.248004094" Mar 13 10:39:59.750612 master-0 kubenswrapper[7271]: I0313 10:39:59.749780 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" podStartSLOduration=6.22608915 podStartE2EDuration="33.749749919s" podCreationTimestamp="2026-03-13 10:39:26 +0000 UTC" firstStartedPulling="2026-03-13 10:39:30.235675476 +0000 UTC m=+224.762497856" lastFinishedPulling="2026-03-13 10:39:57.759336235 +0000 UTC m=+252.286158625" observedRunningTime="2026-03-13 10:39:59.749730729 +0000 UTC m=+254.276553119" watchObservedRunningTime="2026-03-13 10:39:59.749749919 +0000 UTC m=+254.276572309" Mar 13 10:40:00.449510 master-0 kubenswrapper[7271]: I0313 10:40:00.449441 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:40:00.449510 master-0 kubenswrapper[7271]: I0313 10:40:00.449509 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:40:00.471211 master-0 kubenswrapper[7271]: I0313 10:40:00.470921 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:40:00.471211 master-0 kubenswrapper[7271]: I0313 10:40:00.470986 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:40:00.486658 master-0 kubenswrapper[7271]: I0313 10:40:00.485649 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:40:00.486658 master-0 kubenswrapper[7271]: I0313 10:40:00.485718 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:40:00.806061 master-0 kubenswrapper[7271]: I0313 10:40:00.806006 7271 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"] Mar 13 10:40:00.806302 master-0 kubenswrapper[7271]: I0313 10:40:00.806250 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" podUID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerName="kube-rbac-proxy" containerID="cri-o://f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b" gracePeriod=30 Mar 13 10:40:00.806487 master-0 kubenswrapper[7271]: I0313 10:40:00.806379 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" podUID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerName="machine-approver-controller" containerID="cri-o://f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511" gracePeriod=30 Mar 13 10:40:00.964563 master-0 kubenswrapper[7271]: I0313 10:40:00.964516 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" Mar 13 10:40:01.067934 master-0 kubenswrapper[7271]: I0313 10:40:01.067806 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-auth-proxy-config\") pod \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " Mar 13 10:40:01.068719 master-0 kubenswrapper[7271]: I0313 10:40:01.068699 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-config\") pod \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " Mar 13 10:40:01.069147 master-0 kubenswrapper[7271]: I0313 10:40:01.069130 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9l2d\" (UniqueName: \"kubernetes.io/projected/253196c1-ea2c-4382-b0fc-c56a8d919b9a-kube-api-access-h9l2d\") pod \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " Mar 13 10:40:01.069669 master-0 kubenswrapper[7271]: I0313 10:40:01.069652 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/253196c1-ea2c-4382-b0fc-c56a8d919b9a-machine-approver-tls\") pod \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\" (UID: \"253196c1-ea2c-4382-b0fc-c56a8d919b9a\") " Mar 13 10:40:01.069877 master-0 kubenswrapper[7271]: I0313 10:40:01.068631 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "253196c1-ea2c-4382-b0fc-c56a8d919b9a" (UID: "253196c1-ea2c-4382-b0fc-c56a8d919b9a"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:40:01.069877 master-0 kubenswrapper[7271]: I0313 10:40:01.069059 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-config" (OuterVolumeSpecName: "config") pod "253196c1-ea2c-4382-b0fc-c56a8d919b9a" (UID: "253196c1-ea2c-4382-b0fc-c56a8d919b9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:40:01.070404 master-0 kubenswrapper[7271]: I0313 10:40:01.070376 7271 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:40:01.070546 master-0 kubenswrapper[7271]: I0313 10:40:01.070532 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253196c1-ea2c-4382-b0fc-c56a8d919b9a-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:40:01.073847 master-0 kubenswrapper[7271]: I0313 10:40:01.073822 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253196c1-ea2c-4382-b0fc-c56a8d919b9a-kube-api-access-h9l2d" (OuterVolumeSpecName: "kube-api-access-h9l2d") pod "253196c1-ea2c-4382-b0fc-c56a8d919b9a" (UID: "253196c1-ea2c-4382-b0fc-c56a8d919b9a"). InnerVolumeSpecName "kube-api-access-h9l2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:40:01.084709 master-0 kubenswrapper[7271]: I0313 10:40:01.084650 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/253196c1-ea2c-4382-b0fc-c56a8d919b9a-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "253196c1-ea2c-4382-b0fc-c56a8d919b9a" (UID: "253196c1-ea2c-4382-b0fc-c56a8d919b9a"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:40:01.171834 master-0 kubenswrapper[7271]: I0313 10:40:01.171797 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9l2d\" (UniqueName: \"kubernetes.io/projected/253196c1-ea2c-4382-b0fc-c56a8d919b9a-kube-api-access-h9l2d\") on node \"master-0\" DevicePath \"\"" Mar 13 10:40:01.172255 master-0 kubenswrapper[7271]: I0313 10:40:01.172244 7271 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/253196c1-ea2c-4382-b0fc-c56a8d919b9a-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 10:40:01.493857 master-0 kubenswrapper[7271]: I0313 10:40:01.493653 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bgvrc" podUID="4a1b43c4-55b9-4c72-ba7c-9089bf28cf16" containerName="registry-server" probeResult="failure" output=< Mar 13 10:40:01.493857 master-0 kubenswrapper[7271]: timeout: failed to connect service ":50051" within 1s Mar 13 10:40:01.493857 master-0 kubenswrapper[7271]: > Mar 13 10:40:01.509755 master-0 kubenswrapper[7271]: I0313 10:40:01.509672 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g"] Mar 13 10:40:01.510039 master-0 kubenswrapper[7271]: E0313 10:40:01.509958 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerName="kube-rbac-proxy" Mar 13 10:40:01.510039 master-0 kubenswrapper[7271]: I0313 10:40:01.509974 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerName="kube-rbac-proxy" Mar 13 10:40:01.510039 master-0 kubenswrapper[7271]: E0313 10:40:01.510002 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerName="machine-approver-controller" Mar 13 10:40:01.510039 master-0 
kubenswrapper[7271]: I0313 10:40:01.510009 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerName="machine-approver-controller" Mar 13 10:40:01.510228 master-0 kubenswrapper[7271]: I0313 10:40:01.510124 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerName="machine-approver-controller" Mar 13 10:40:01.510228 master-0 kubenswrapper[7271]: I0313 10:40:01.510140 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerName="kube-rbac-proxy" Mar 13 10:40:01.510999 master-0 kubenswrapper[7271]: I0313 10:40:01.510963 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.513133 master-0 kubenswrapper[7271]: I0313 10:40:01.513091 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-tf6mr" Mar 13 10:40:01.513417 master-0 kubenswrapper[7271]: I0313 10:40:01.513392 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 10:40:01.521189 master-0 kubenswrapper[7271]: I0313 10:40:01.521109 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g"] Mar 13 10:40:01.529841 master-0 kubenswrapper[7271]: I0313 10:40:01.529751 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vr4ts" podUID="5aa507cf-017d-44f5-8662-77547f82fb51" containerName="registry-server" probeResult="failure" output=< Mar 13 10:40:01.529841 master-0 kubenswrapper[7271]: timeout: failed to connect service ":50051" within 1s Mar 13 10:40:01.529841 master-0 kubenswrapper[7271]: > Mar 13 10:40:01.548405 master-0 kubenswrapper[7271]: 
I0313 10:40:01.548316 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mrztj" podUID="2a05e72d-836f-40e0-8a5c-ee02dce494b3" containerName="registry-server" probeResult="failure" output=< Mar 13 10:40:01.548405 master-0 kubenswrapper[7271]: timeout: failed to connect service ":50051" within 1s Mar 13 10:40:01.548405 master-0 kubenswrapper[7271]: > Mar 13 10:40:01.577949 master-0 kubenswrapper[7271]: I0313 10:40:01.577901 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.578339 master-0 kubenswrapper[7271]: I0313 10:40:01.578313 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7v6s\" (UniqueName: \"kubernetes.io/projected/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-kube-api-access-m7v6s\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.578462 master-0 kubenswrapper[7271]: I0313 10:40:01.578445 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.631774 master-0 kubenswrapper[7271]: I0313 10:40:01.631732 7271 generic.go:334] "Generic (PLEG): container finished" 
podID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerID="f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511" exitCode=0 Mar 13 10:40:01.632050 master-0 kubenswrapper[7271]: I0313 10:40:01.632035 7271 generic.go:334] "Generic (PLEG): container finished" podID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" containerID="f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b" exitCode=0 Mar 13 10:40:01.632127 master-0 kubenswrapper[7271]: I0313 10:40:01.631792 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" Mar 13 10:40:01.632127 master-0 kubenswrapper[7271]: I0313 10:40:01.631815 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" event={"ID":"253196c1-ea2c-4382-b0fc-c56a8d919b9a","Type":"ContainerDied","Data":"f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511"} Mar 13 10:40:01.632204 master-0 kubenswrapper[7271]: I0313 10:40:01.632152 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" event={"ID":"253196c1-ea2c-4382-b0fc-c56a8d919b9a","Type":"ContainerDied","Data":"f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b"} Mar 13 10:40:01.632204 master-0 kubenswrapper[7271]: I0313 10:40:01.632171 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q" event={"ID":"253196c1-ea2c-4382-b0fc-c56a8d919b9a","Type":"ContainerDied","Data":"99b87d3645356bd9adf99abbbc876f3689c165134b9de6ed97fe71d5071ab38a"} Mar 13 10:40:01.632204 master-0 kubenswrapper[7271]: I0313 10:40:01.632190 7271 scope.go:117] "RemoveContainer" containerID="f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511" Mar 13 10:40:01.645109 master-0 kubenswrapper[7271]: I0313 10:40:01.645083 7271 scope.go:117] "RemoveContainer" 
containerID="f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b" Mar 13 10:40:01.659756 master-0 kubenswrapper[7271]: I0313 10:40:01.659710 7271 scope.go:117] "RemoveContainer" containerID="f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511" Mar 13 10:40:01.660329 master-0 kubenswrapper[7271]: E0313 10:40:01.660117 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511\": container with ID starting with f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511 not found: ID does not exist" containerID="f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511" Mar 13 10:40:01.660329 master-0 kubenswrapper[7271]: I0313 10:40:01.660154 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511"} err="failed to get container status \"f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511\": rpc error: code = NotFound desc = could not find container \"f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511\": container with ID starting with f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511 not found: ID does not exist" Mar 13 10:40:01.660329 master-0 kubenswrapper[7271]: I0313 10:40:01.660178 7271 scope.go:117] "RemoveContainer" containerID="f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b" Mar 13 10:40:01.660468 master-0 kubenswrapper[7271]: E0313 10:40:01.660374 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b\": container with ID starting with f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b not found: ID does not exist" 
containerID="f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b" Mar 13 10:40:01.660468 master-0 kubenswrapper[7271]: I0313 10:40:01.660400 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b"} err="failed to get container status \"f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b\": rpc error: code = NotFound desc = could not find container \"f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b\": container with ID starting with f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b not found: ID does not exist" Mar 13 10:40:01.660468 master-0 kubenswrapper[7271]: I0313 10:40:01.660416 7271 scope.go:117] "RemoveContainer" containerID="f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511" Mar 13 10:40:01.661186 master-0 kubenswrapper[7271]: I0313 10:40:01.660725 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511"} err="failed to get container status \"f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511\": rpc error: code = NotFound desc = could not find container \"f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511\": container with ID starting with f96c55f0c8dd3816027168f6f216080d8a9d8cd7f31fcfc7f77723e69489e511 not found: ID does not exist" Mar 13 10:40:01.661186 master-0 kubenswrapper[7271]: I0313 10:40:01.660771 7271 scope.go:117] "RemoveContainer" containerID="f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b" Mar 13 10:40:01.661186 master-0 kubenswrapper[7271]: I0313 10:40:01.661028 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b"} err="failed to get container status 
\"f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b\": rpc error: code = NotFound desc = could not find container \"f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b\": container with ID starting with f719882dc495b29409a4eba28a794b5bdb47eb164a7db5c0e50970280a97169b not found: ID does not exist" Mar 13 10:40:01.668397 master-0 kubenswrapper[7271]: I0313 10:40:01.668182 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"] Mar 13 10:40:01.674316 master-0 kubenswrapper[7271]: I0313 10:40:01.674252 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-bkp8q"] Mar 13 10:40:01.680411 master-0 kubenswrapper[7271]: I0313 10:40:01.680054 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7v6s\" (UniqueName: \"kubernetes.io/projected/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-kube-api-access-m7v6s\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.680411 master-0 kubenswrapper[7271]: I0313 10:40:01.680099 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.680411 master-0 kubenswrapper[7271]: I0313 10:40:01.680137 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: 
\"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.681925 master-0 kubenswrapper[7271]: I0313 10:40:01.681891 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.685365 master-0 kubenswrapper[7271]: I0313 10:40:01.685313 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.696726 master-0 kubenswrapper[7271]: I0313 10:40:01.696642 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f"] Mar 13 10:40:01.697838 master-0 kubenswrapper[7271]: I0313 10:40:01.697807 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.700538 master-0 kubenswrapper[7271]: I0313 10:40:01.700490 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 10:40:01.700735 master-0 kubenswrapper[7271]: I0313 10:40:01.700684 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-xdq92" Mar 13 10:40:01.700782 master-0 kubenswrapper[7271]: I0313 10:40:01.700719 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 10:40:01.700858 master-0 kubenswrapper[7271]: I0313 10:40:01.700831 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 10:40:01.701032 master-0 kubenswrapper[7271]: I0313 10:40:01.701006 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 10:40:01.701086 master-0 kubenswrapper[7271]: I0313 10:40:01.701067 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 10:40:01.705537 master-0 kubenswrapper[7271]: I0313 10:40:01.705483 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7v6s\" (UniqueName: \"kubernetes.io/projected/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-kube-api-access-m7v6s\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.781567 master-0 kubenswrapper[7271]: I0313 10:40:01.781432 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2znn\" 
(UniqueName: \"kubernetes.io/projected/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-kube-api-access-s2znn\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.781567 master-0 kubenswrapper[7271]: I0313 10:40:01.781517 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.781567 master-0 kubenswrapper[7271]: I0313 10:40:01.781566 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.781955 master-0 kubenswrapper[7271]: I0313 10:40:01.781890 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.844789 master-0 kubenswrapper[7271]: I0313 10:40:01.844722 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:40:01.884500 master-0 kubenswrapper[7271]: I0313 10:40:01.884424 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.884825 master-0 kubenswrapper[7271]: I0313 10:40:01.884722 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2znn\" (UniqueName: \"kubernetes.io/projected/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-kube-api-access-s2znn\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.885067 master-0 kubenswrapper[7271]: I0313 10:40:01.885030 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.885128 master-0 kubenswrapper[7271]: I0313 10:40:01.885105 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.885880 master-0 kubenswrapper[7271]: I0313 10:40:01.885774 7271 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.886173 master-0 kubenswrapper[7271]: I0313 10:40:01.886146 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.888376 master-0 kubenswrapper[7271]: I0313 10:40:01.888336 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:01.909161 master-0 kubenswrapper[7271]: I0313 10:40:01.909036 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2znn\" (UniqueName: \"kubernetes.io/projected/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-kube-api-access-s2znn\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:02.018774 master-0 kubenswrapper[7271]: I0313 10:40:02.018702 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:40:02.039916 master-0 kubenswrapper[7271]: W0313 10:40:02.039839 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec121f87_93ea_468c_a25f_2ec5e7d0e0ee.slice/crio-60b98cd864b31c5d3ce33f7e617eaf280215ab256f27e08a3aa813b955cd4550 WatchSource:0}: Error finding container 60b98cd864b31c5d3ce33f7e617eaf280215ab256f27e08a3aa813b955cd4550: Status 404 returned error can't find the container with id 60b98cd864b31c5d3ce33f7e617eaf280215ab256f27e08a3aa813b955cd4550 Mar 13 10:40:02.245698 master-0 kubenswrapper[7271]: I0313 10:40:02.245618 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g"] Mar 13 10:40:02.255862 master-0 kubenswrapper[7271]: W0313 10:40:02.255817 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26cc0e72_8b4f_4087_89b9_05d2cf6df3f6.slice/crio-326eff89f13ef648d073ebec6b104b4118323f6c130c4cf1f4122764a419957e WatchSource:0}: Error finding container 326eff89f13ef648d073ebec6b104b4118323f6c130c4cf1f4122764a419957e: Status 404 returned error can't find the container with id 326eff89f13ef648d073ebec6b104b4118323f6c130c4cf1f4122764a419957e Mar 13 10:40:02.265838 master-0 kubenswrapper[7271]: I0313 10:40:02.265791 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:40:02.265838 master-0 kubenswrapper[7271]: I0313 10:40:02.265837 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:40:02.535615 master-0 kubenswrapper[7271]: I0313 10:40:02.533451 7271 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v"] Mar 13 10:40:02.535615 master-0 kubenswrapper[7271]: I0313 10:40:02.535376 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" Mar 13 10:40:02.543096 master-0 kubenswrapper[7271]: I0313 10:40:02.541964 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-b4x54"] Mar 13 10:40:02.549210 master-0 kubenswrapper[7271]: I0313 10:40:02.549157 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.551950 master-0 kubenswrapper[7271]: I0313 10:40:02.551099 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt"] Mar 13 10:40:02.557668 master-0 kubenswrapper[7271]: I0313 10:40:02.554617 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 10:40:02.557668 master-0 kubenswrapper[7271]: I0313 10:40:02.554811 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 10:40:02.557668 master-0 kubenswrapper[7271]: I0313 10:40:02.554865 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 10:40:02.557668 master-0 kubenswrapper[7271]: I0313 10:40:02.555000 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-x42l5" Mar 13 10:40:02.557668 master-0 kubenswrapper[7271]: I0313 10:40:02.555046 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 10:40:02.557668 master-0 kubenswrapper[7271]: I0313 10:40:02.555133 7271 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 10:40:02.557668 master-0 kubenswrapper[7271]: I0313 10:40:02.555393 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 10:40:02.557668 master-0 kubenswrapper[7271]: I0313 10:40:02.556154 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" Mar 13 10:40:02.557668 master-0 kubenswrapper[7271]: I0313 10:40:02.556551 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt"] Mar 13 10:40:02.559739 master-0 kubenswrapper[7271]: I0313 10:40:02.559490 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-lr8wh" Mar 13 10:40:02.559739 master-0 kubenswrapper[7271]: I0313 10:40:02.559648 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 13 10:40:02.559739 master-0 kubenswrapper[7271]: I0313 10:40:02.559491 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v"] Mar 13 10:40:02.596494 master-0 kubenswrapper[7271]: I0313 10:40:02.596458 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-default-certificate\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.596746 master-0 kubenswrapper[7271]: I0313 10:40:02.596732 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-stats-auth\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.596850 master-0 kubenswrapper[7271]: I0313 10:40:02.596834 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-metrics-certs\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.596946 master-0 kubenswrapper[7271]: I0313 10:40:02.596932 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb778c86-ea51-4eab-82b8-a8e0bec0f050-service-ca-bundle\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.597030 master-0 kubenswrapper[7271]: I0313 10:40:02.597018 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkdfn\" (UniqueName: \"kubernetes.io/projected/eb778c86-ea51-4eab-82b8-a8e0bec0f050-kube-api-access-hkdfn\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.600374 master-0 kubenswrapper[7271]: I0313 10:40:02.597174 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r657p\" (UniqueName: \"kubernetes.io/projected/2195f7be-b41e-4ae2-b737-d5782e0d41a8-kube-api-access-r657p\") pod \"network-check-source-7c67b67d47-jbx9v\" (UID: \"2195f7be-b41e-4ae2-b737-d5782e0d41a8\") " 
pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" Mar 13 10:40:02.640788 master-0 kubenswrapper[7271]: I0313 10:40:02.640739 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" event={"ID":"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee","Type":"ContainerStarted","Data":"64678ebcb68e6bed917a1b002aba4f9986d59e81a6fdab83010f8da8b3807323"} Mar 13 10:40:02.641041 master-0 kubenswrapper[7271]: I0313 10:40:02.640803 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" event={"ID":"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee","Type":"ContainerStarted","Data":"53042a0659942c843b449fbe1ca84a08d9908f5ff0ea1dcdb8d0de426901fe60"} Mar 13 10:40:02.641041 master-0 kubenswrapper[7271]: I0313 10:40:02.640817 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" event={"ID":"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee","Type":"ContainerStarted","Data":"60b98cd864b31c5d3ce33f7e617eaf280215ab256f27e08a3aa813b955cd4550"} Mar 13 10:40:02.645224 master-0 kubenswrapper[7271]: I0313 10:40:02.645188 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" event={"ID":"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6","Type":"ContainerStarted","Data":"eebcde362311b1142ae265b43696442d83843490a7be39b9eae7e05e3777bfee"} Mar 13 10:40:02.645224 master-0 kubenswrapper[7271]: I0313 10:40:02.645219 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" event={"ID":"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6","Type":"ContainerStarted","Data":"a1bf753439496bde197d1c543409be9bfb058607cd0879d7141d07df38f38943"} Mar 13 10:40:02.645224 master-0 kubenswrapper[7271]: I0313 10:40:02.645231 7271 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" event={"ID":"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6","Type":"ContainerStarted","Data":"326eff89f13ef648d073ebec6b104b4118323f6c130c4cf1f4122764a419957e"} Mar 13 10:40:02.656110 master-0 kubenswrapper[7271]: I0313 10:40:02.656027 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" podStartSLOduration=1.6559977209999999 podStartE2EDuration="1.655997721s" podCreationTimestamp="2026-03-13 10:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:40:02.653137106 +0000 UTC m=+257.179959496" watchObservedRunningTime="2026-03-13 10:40:02.655997721 +0000 UTC m=+257.182820111" Mar 13 10:40:02.667967 master-0 kubenswrapper[7271]: I0313 10:40:02.667893 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" podStartSLOduration=1.667871645 podStartE2EDuration="1.667871645s" podCreationTimestamp="2026-03-13 10:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:40:02.666563411 +0000 UTC m=+257.193385801" watchObservedRunningTime="2026-03-13 10:40:02.667871645 +0000 UTC m=+257.194694035" Mar 13 10:40:02.698395 master-0 kubenswrapper[7271]: I0313 10:40:02.698280 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-metrics-certs\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.698395 master-0 kubenswrapper[7271]: I0313 10:40:02.698380 7271 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb778c86-ea51-4eab-82b8-a8e0bec0f050-service-ca-bundle\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.698717 master-0 kubenswrapper[7271]: I0313 10:40:02.698444 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkdfn\" (UniqueName: \"kubernetes.io/projected/eb778c86-ea51-4eab-82b8-a8e0bec0f050-kube-api-access-hkdfn\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.698717 master-0 kubenswrapper[7271]: I0313 10:40:02.698521 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-vkqtt\" (UID: \"a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" Mar 13 10:40:02.698717 master-0 kubenswrapper[7271]: I0313 10:40:02.698559 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r657p\" (UniqueName: \"kubernetes.io/projected/2195f7be-b41e-4ae2-b737-d5782e0d41a8-kube-api-access-r657p\") pod \"network-check-source-7c67b67d47-jbx9v\" (UID: \"2195f7be-b41e-4ae2-b737-d5782e0d41a8\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" Mar 13 10:40:02.699844 master-0 kubenswrapper[7271]: I0313 10:40:02.699785 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-default-certificate\") pod \"router-default-79f8cd6fdd-b4x54\" 
(UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.700697 master-0 kubenswrapper[7271]: I0313 10:40:02.699870 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-stats-auth\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.701301 master-0 kubenswrapper[7271]: I0313 10:40:02.701263 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb778c86-ea51-4eab-82b8-a8e0bec0f050-service-ca-bundle\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.703538 master-0 kubenswrapper[7271]: I0313 10:40:02.703486 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-default-certificate\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.706773 master-0 kubenswrapper[7271]: I0313 10:40:02.703755 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-metrics-certs\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.706773 master-0 kubenswrapper[7271]: I0313 10:40:02.703784 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-stats-auth\") pod 
\"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.725442 master-0 kubenswrapper[7271]: I0313 10:40:02.725373 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r657p\" (UniqueName: \"kubernetes.io/projected/2195f7be-b41e-4ae2-b737-d5782e0d41a8-kube-api-access-r657p\") pod \"network-check-source-7c67b67d47-jbx9v\" (UID: \"2195f7be-b41e-4ae2-b737-d5782e0d41a8\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" Mar 13 10:40:02.727303 master-0 kubenswrapper[7271]: I0313 10:40:02.727262 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkdfn\" (UniqueName: \"kubernetes.io/projected/eb778c86-ea51-4eab-82b8-a8e0bec0f050-kube-api-access-hkdfn\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.801638 master-0 kubenswrapper[7271]: I0313 10:40:02.801557 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-vkqtt\" (UID: \"a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" Mar 13 10:40:02.805263 master-0 kubenswrapper[7271]: I0313 10:40:02.805215 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-vkqtt\" (UID: \"a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" Mar 13 10:40:02.859333 master-0 kubenswrapper[7271]: I0313 
10:40:02.859271 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" Mar 13 10:40:02.879939 master-0 kubenswrapper[7271]: I0313 10:40:02.879864 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:02.901789 master-0 kubenswrapper[7271]: I0313 10:40:02.901742 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" Mar 13 10:40:03.266920 master-0 kubenswrapper[7271]: I0313 10:40:03.266444 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v"] Mar 13 10:40:03.270652 master-0 kubenswrapper[7271]: W0313 10:40:03.270613 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2195f7be_b41e_4ae2_b737_d5782e0d41a8.slice/crio-3d5f1f4095f01f19dc1c943afe4e4b0c9a80883c821d2a0dcacc2ad4ee4f8b25 WatchSource:0}: Error finding container 3d5f1f4095f01f19dc1c943afe4e4b0c9a80883c821d2a0dcacc2ad4ee4f8b25: Status 404 returned error can't find the container with id 3d5f1f4095f01f19dc1c943afe4e4b0c9a80883c821d2a0dcacc2ad4ee4f8b25 Mar 13 10:40:03.324820 master-0 kubenswrapper[7271]: I0313 10:40:03.324707 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jdzpd" podUID="beee81ef-5a3a-4df2-85d5-2573679d261f" containerName="registry-server" probeResult="failure" output=< Mar 13 10:40:03.324820 master-0 kubenswrapper[7271]: timeout: failed to connect service ":50051" within 1s Mar 13 10:40:03.324820 master-0 kubenswrapper[7271]: > Mar 13 10:40:03.346288 master-0 kubenswrapper[7271]: I0313 10:40:03.346222 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt"] Mar 13 10:40:03.356290 master-0 kubenswrapper[7271]: W0313 10:40:03.356207 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4e40b43_5a7d_4865_bd3c_ca5911bf3ee3.slice/crio-f4e5ae9525fe60361341f421bac6c1976c0d8c217394b9ae9ea8bc8043db8345 WatchSource:0}: Error finding container f4e5ae9525fe60361341f421bac6c1976c0d8c217394b9ae9ea8bc8043db8345: Status 404 returned error can't find the container with id f4e5ae9525fe60361341f421bac6c1976c0d8c217394b9ae9ea8bc8043db8345 Mar 13 10:40:03.546215 master-0 kubenswrapper[7271]: I0313 10:40:03.546172 7271 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 10:40:03.656757 master-0 kubenswrapper[7271]: I0313 10:40:03.656628 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="253196c1-ea2c-4382-b0fc-c56a8d919b9a" path="/var/lib/kubelet/pods/253196c1-ea2c-4382-b0fc-c56a8d919b9a/volumes" Mar 13 10:40:03.659535 master-0 kubenswrapper[7271]: I0313 10:40:03.659324 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerStarted","Data":"76246e9a1d2379cb0958975bb664cf21b612b44d022ee860fbd36d45bdea98e3"} Mar 13 10:40:03.662915 master-0 kubenswrapper[7271]: I0313 10:40:03.662867 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" event={"ID":"a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3","Type":"ContainerStarted","Data":"f4e5ae9525fe60361341f421bac6c1976c0d8c217394b9ae9ea8bc8043db8345"} Mar 13 10:40:03.665427 master-0 kubenswrapper[7271]: I0313 10:40:03.665388 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" event={"ID":"2195f7be-b41e-4ae2-b737-d5782e0d41a8","Type":"ContainerStarted","Data":"e50e2da89a6bcc90ca9341efd63ffcba3b5fe3774af7ca4392f6d53f89c638cd"} Mar 13 10:40:03.665529 master-0 kubenswrapper[7271]: I0313 10:40:03.665460 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" event={"ID":"2195f7be-b41e-4ae2-b737-d5782e0d41a8","Type":"ContainerStarted","Data":"3d5f1f4095f01f19dc1c943afe4e4b0c9a80883c821d2a0dcacc2ad4ee4f8b25"} Mar 13 10:40:03.682648 master-0 kubenswrapper[7271]: I0313 10:40:03.682379 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" podStartSLOduration=315.682353083 podStartE2EDuration="5m15.682353083s" podCreationTimestamp="2026-03-13 10:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:40:03.680503994 +0000 UTC m=+258.207326384" watchObservedRunningTime="2026-03-13 10:40:03.682353083 +0000 UTC m=+258.209175473" Mar 13 10:40:05.971049 master-0 kubenswrapper[7271]: I0313 10:40:05.970922 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-mhk8z"] Mar 13 10:40:05.971934 master-0 kubenswrapper[7271]: I0313 10:40:05.971905 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:05.974028 master-0 kubenswrapper[7271]: I0313 10:40:05.973989 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 10:40:05.974557 master-0 kubenswrapper[7271]: I0313 10:40:05.974523 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 10:40:05.975345 master-0 kubenswrapper[7271]: I0313 10:40:05.975302 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-5p4h2" Mar 13 10:40:06.059487 master-0 kubenswrapper[7271]: I0313 10:40:06.059401 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-node-bootstrap-token\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.059487 master-0 kubenswrapper[7271]: I0313 10:40:06.059478 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-certs\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.059814 master-0 kubenswrapper[7271]: I0313 10:40:06.059660 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k4c5\" (UniqueName: \"kubernetes.io/projected/4df756f0-c6b6-4730-842a-7ee9227397ae-kube-api-access-8k4c5\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " 
pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.160627 master-0 kubenswrapper[7271]: I0313 10:40:06.160454 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-node-bootstrap-token\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.160627 master-0 kubenswrapper[7271]: I0313 10:40:06.160530 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-certs\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.160627 master-0 kubenswrapper[7271]: I0313 10:40:06.160613 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k4c5\" (UniqueName: \"kubernetes.io/projected/4df756f0-c6b6-4730-842a-7ee9227397ae-kube-api-access-8k4c5\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.164307 master-0 kubenswrapper[7271]: I0313 10:40:06.164263 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-node-bootstrap-token\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.165815 master-0 kubenswrapper[7271]: I0313 10:40:06.165760 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-certs\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.185406 master-0 kubenswrapper[7271]: I0313 10:40:06.185328 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k4c5\" (UniqueName: \"kubernetes.io/projected/4df756f0-c6b6-4730-842a-7ee9227397ae-kube-api-access-8k4c5\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.292554 master-0 kubenswrapper[7271]: I0313 10:40:06.292458 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:40:06.314846 master-0 kubenswrapper[7271]: W0313 10:40:06.314790 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4df756f0_c6b6_4730_842a_7ee9227397ae.slice/crio-29733a7ea73e3174735d72b2210dd940a71dd3f008e394e00385294d9ba36ee3 WatchSource:0}: Error finding container 29733a7ea73e3174735d72b2210dd940a71dd3f008e394e00385294d9ba36ee3: Status 404 returned error can't find the container with id 29733a7ea73e3174735d72b2210dd940a71dd3f008e394e00385294d9ba36ee3 Mar 13 10:40:06.687894 master-0 kubenswrapper[7271]: I0313 10:40:06.687740 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerStarted","Data":"c38e1852651e9aa29b7c4aa782bd48bf04b7ff3ecd204555f9421edc8fb3fef6"} Mar 13 10:40:06.689485 master-0 kubenswrapper[7271]: I0313 10:40:06.689438 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mhk8z" 
event={"ID":"4df756f0-c6b6-4730-842a-7ee9227397ae","Type":"ContainerStarted","Data":"251a767cf8e31a06dce2e71b8e729c17932e3061c40f02bc92bcc34ee03eec89"} Mar 13 10:40:06.689542 master-0 kubenswrapper[7271]: I0313 10:40:06.689491 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mhk8z" event={"ID":"4df756f0-c6b6-4730-842a-7ee9227397ae","Type":"ContainerStarted","Data":"29733a7ea73e3174735d72b2210dd940a71dd3f008e394e00385294d9ba36ee3"} Mar 13 10:40:06.691088 master-0 kubenswrapper[7271]: I0313 10:40:06.691022 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" event={"ID":"a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3","Type":"ContainerStarted","Data":"17fdd0d51f2307a4f258394b0684d28d1aa12c69002ecabdc278eae099d710fa"} Mar 13 10:40:06.691298 master-0 kubenswrapper[7271]: I0313 10:40:06.691270 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" Mar 13 10:40:06.696555 master-0 kubenswrapper[7271]: I0313 10:40:06.696508 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" Mar 13 10:40:06.872644 master-0 kubenswrapper[7271]: I0313 10:40:06.872539 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podStartSLOduration=67.220594211 podStartE2EDuration="1m9.872513865s" podCreationTimestamp="2026-03-13 10:38:57 +0000 UTC" firstStartedPulling="2026-03-13 10:40:02.912521837 +0000 UTC m=+257.439344227" lastFinishedPulling="2026-03-13 10:40:05.564441491 +0000 UTC m=+260.091263881" observedRunningTime="2026-03-13 10:40:06.866870656 +0000 UTC m=+261.393693066" watchObservedRunningTime="2026-03-13 10:40:06.872513865 +0000 UTC m=+261.399336265" Mar 13 10:40:06.880777 
master-0 kubenswrapper[7271]: I0313 10:40:06.880730 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:06.883513 master-0 kubenswrapper[7271]: I0313 10:40:06.883480 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:06.883513 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:06.883513 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:06.883513 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:06.883708 master-0 kubenswrapper[7271]: I0313 10:40:06.883544 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:06.887347 master-0 kubenswrapper[7271]: I0313 10:40:06.887293 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" podStartSLOduration=67.686168068 podStartE2EDuration="1m9.887274866s" podCreationTimestamp="2026-03-13 10:38:57 +0000 UTC" firstStartedPulling="2026-03-13 10:40:03.363333633 +0000 UTC m=+257.890156023" lastFinishedPulling="2026-03-13 10:40:05.564440431 +0000 UTC m=+260.091262821" observedRunningTime="2026-03-13 10:40:06.884714658 +0000 UTC m=+261.411537048" watchObservedRunningTime="2026-03-13 10:40:06.887274866 +0000 UTC m=+261.414097246" Mar 13 10:40:06.902248 master-0 kubenswrapper[7271]: I0313 10:40:06.902065 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mhk8z" podStartSLOduration=1.902047016 
podStartE2EDuration="1.902047016s" podCreationTimestamp="2026-03-13 10:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:40:06.898271216 +0000 UTC m=+261.425093606" watchObservedRunningTime="2026-03-13 10:40:06.902047016 +0000 UTC m=+261.428869406" Mar 13 10:40:07.298697 master-0 kubenswrapper[7271]: I0313 10:40:07.298628 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7"] Mar 13 10:40:07.299277 master-0 kubenswrapper[7271]: I0313 10:40:07.298966 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="cluster-cloud-controller-manager" containerID="cri-o://9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32" gracePeriod=30 Mar 13 10:40:07.299277 master-0 kubenswrapper[7271]: I0313 10:40:07.299078 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="kube-rbac-proxy" containerID="cri-o://f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43" gracePeriod=30 Mar 13 10:40:07.299277 master-0 kubenswrapper[7271]: I0313 10:40:07.299124 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="config-sync-controllers" containerID="cri-o://100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7" gracePeriod=30 Mar 13 10:40:07.461072 master-0 kubenswrapper[7271]: I0313 
10:40:07.460986 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:40:07.581183 master-0 kubenswrapper[7271]: I0313 10:40:07.580991 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f64995-fa1e-4205-981b-be7a7ae67115-cloud-controller-manager-operator-tls\") pod \"e9f64995-fa1e-4205-981b-be7a7ae67115\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " Mar 13 10:40:07.581183 master-0 kubenswrapper[7271]: I0313 10:40:07.581086 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-auth-proxy-config\") pod \"e9f64995-fa1e-4205-981b-be7a7ae67115\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " Mar 13 10:40:07.581484 master-0 kubenswrapper[7271]: I0313 10:40:07.581272 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9f64995-fa1e-4205-981b-be7a7ae67115-host-etc-kube\") pod \"e9f64995-fa1e-4205-981b-be7a7ae67115\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " Mar 13 10:40:07.581484 master-0 kubenswrapper[7271]: I0313 10:40:07.581309 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpwh9\" (UniqueName: \"kubernetes.io/projected/e9f64995-fa1e-4205-981b-be7a7ae67115-kube-api-access-vpwh9\") pod \"e9f64995-fa1e-4205-981b-be7a7ae67115\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " Mar 13 10:40:07.581484 master-0 kubenswrapper[7271]: I0313 10:40:07.581352 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-images\") pod 
\"e9f64995-fa1e-4205-981b-be7a7ae67115\" (UID: \"e9f64995-fa1e-4205-981b-be7a7ae67115\") " Mar 13 10:40:07.581656 master-0 kubenswrapper[7271]: I0313 10:40:07.581632 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9f64995-fa1e-4205-981b-be7a7ae67115-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "e9f64995-fa1e-4205-981b-be7a7ae67115" (UID: "e9f64995-fa1e-4205-981b-be7a7ae67115"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:40:07.581928 master-0 kubenswrapper[7271]: I0313 10:40:07.581789 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "e9f64995-fa1e-4205-981b-be7a7ae67115" (UID: "e9f64995-fa1e-4205-981b-be7a7ae67115"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:40:07.582024 master-0 kubenswrapper[7271]: I0313 10:40:07.581997 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-images" (OuterVolumeSpecName: "images") pod "e9f64995-fa1e-4205-981b-be7a7ae67115" (UID: "e9f64995-fa1e-4205-981b-be7a7ae67115"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:40:07.584996 master-0 kubenswrapper[7271]: I0313 10:40:07.584945 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f64995-fa1e-4205-981b-be7a7ae67115-kube-api-access-vpwh9" (OuterVolumeSpecName: "kube-api-access-vpwh9") pod "e9f64995-fa1e-4205-981b-be7a7ae67115" (UID: "e9f64995-fa1e-4205-981b-be7a7ae67115"). InnerVolumeSpecName "kube-api-access-vpwh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:40:07.585733 master-0 kubenswrapper[7271]: I0313 10:40:07.585678 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f64995-fa1e-4205-981b-be7a7ae67115-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "e9f64995-fa1e-4205-981b-be7a7ae67115" (UID: "e9f64995-fa1e-4205-981b-be7a7ae67115"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:40:07.653219 master-0 kubenswrapper[7271]: I0313 10:40:07.653159 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp"] Mar 13 10:40:07.653491 master-0 kubenswrapper[7271]: E0313 10:40:07.653399 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="config-sync-controllers" Mar 13 10:40:07.653491 master-0 kubenswrapper[7271]: I0313 10:40:07.653419 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="config-sync-controllers" Mar 13 10:40:07.653491 master-0 kubenswrapper[7271]: E0313 10:40:07.653434 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="kube-rbac-proxy" Mar 13 10:40:07.653491 master-0 kubenswrapper[7271]: I0313 10:40:07.653441 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="kube-rbac-proxy" Mar 13 10:40:07.653491 master-0 kubenswrapper[7271]: E0313 10:40:07.653457 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="cluster-cloud-controller-manager" Mar 13 10:40:07.653491 master-0 kubenswrapper[7271]: I0313 10:40:07.653465 7271 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="cluster-cloud-controller-manager" Mar 13 10:40:07.653785 master-0 kubenswrapper[7271]: I0313 10:40:07.653579 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="cluster-cloud-controller-manager" Mar 13 10:40:07.653785 master-0 kubenswrapper[7271]: I0313 10:40:07.653610 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="kube-rbac-proxy" Mar 13 10:40:07.653785 master-0 kubenswrapper[7271]: I0313 10:40:07.653637 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerName="config-sync-controllers" Mar 13 10:40:07.654350 master-0 kubenswrapper[7271]: I0313 10:40:07.654327 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.657651 master-0 kubenswrapper[7271]: I0313 10:40:07.657578 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 13 10:40:07.658068 master-0 kubenswrapper[7271]: I0313 10:40:07.658037 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 13 10:40:07.658569 master-0 kubenswrapper[7271]: I0313 10:40:07.658224 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-45xkz" Mar 13 10:40:07.658569 master-0 kubenswrapper[7271]: I0313 10:40:07.658391 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 13 10:40:07.665805 master-0 kubenswrapper[7271]: I0313 10:40:07.665722 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp"] Mar 13 10:40:07.691686 master-0 
kubenswrapper[7271]: I0313 10:40:07.683324 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpwh9\" (UniqueName: \"kubernetes.io/projected/e9f64995-fa1e-4205-981b-be7a7ae67115-kube-api-access-vpwh9\") on node \"master-0\" DevicePath \"\"" Mar 13 10:40:07.691686 master-0 kubenswrapper[7271]: I0313 10:40:07.683365 7271 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-images\") on node \"master-0\" DevicePath \"\"" Mar 13 10:40:07.691686 master-0 kubenswrapper[7271]: I0313 10:40:07.683378 7271 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f64995-fa1e-4205-981b-be7a7ae67115-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 10:40:07.691686 master-0 kubenswrapper[7271]: I0313 10:40:07.683390 7271 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e9f64995-fa1e-4205-981b-be7a7ae67115-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:40:07.691686 master-0 kubenswrapper[7271]: I0313 10:40:07.683404 7271 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9f64995-fa1e-4205-981b-be7a7ae67115-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 13 10:40:07.702457 master-0 kubenswrapper[7271]: I0313 10:40:07.702407 7271 generic.go:334] "Generic (PLEG): container finished" podID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerID="f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43" exitCode=0 Mar 13 10:40:07.702701 master-0 kubenswrapper[7271]: I0313 10:40:07.702679 7271 generic.go:334] "Generic (PLEG): container finished" podID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerID="100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7" exitCode=0 Mar 13 10:40:07.702795 
master-0 kubenswrapper[7271]: I0313 10:40:07.702780 7271 generic.go:334] "Generic (PLEG): container finished" podID="e9f64995-fa1e-4205-981b-be7a7ae67115" containerID="9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32" exitCode=0 Mar 13 10:40:07.703775 master-0 kubenswrapper[7271]: I0313 10:40:07.703754 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" event={"ID":"e9f64995-fa1e-4205-981b-be7a7ae67115","Type":"ContainerDied","Data":"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43"} Mar 13 10:40:07.703901 master-0 kubenswrapper[7271]: I0313 10:40:07.703883 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" event={"ID":"e9f64995-fa1e-4205-981b-be7a7ae67115","Type":"ContainerDied","Data":"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7"} Mar 13 10:40:07.703990 master-0 kubenswrapper[7271]: I0313 10:40:07.703974 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" event={"ID":"e9f64995-fa1e-4205-981b-be7a7ae67115","Type":"ContainerDied","Data":"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32"} Mar 13 10:40:07.704116 master-0 kubenswrapper[7271]: I0313 10:40:07.704099 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" event={"ID":"e9f64995-fa1e-4205-981b-be7a7ae67115","Type":"ContainerDied","Data":"f034de0da3c46b265fa5f09cbe0e11e48a199aeaabc3bd2f6dde6e2b9b1b9898"} Mar 13 10:40:07.704214 master-0 kubenswrapper[7271]: I0313 10:40:07.704198 7271 scope.go:117] "RemoveContainer" containerID="f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43" Mar 13 
10:40:07.704409 master-0 kubenswrapper[7271]: I0313 10:40:07.703759 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7" Mar 13 10:40:07.734448 master-0 kubenswrapper[7271]: I0313 10:40:07.734279 7271 scope.go:117] "RemoveContainer" containerID="100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7" Mar 13 10:40:07.742490 master-0 kubenswrapper[7271]: I0313 10:40:07.742437 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7"] Mar 13 10:40:07.744164 master-0 kubenswrapper[7271]: I0313 10:40:07.744101 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-r8ck7"] Mar 13 10:40:07.767608 master-0 kubenswrapper[7271]: I0313 10:40:07.766820 7271 scope.go:117] "RemoveContainer" containerID="9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32" Mar 13 10:40:07.784221 master-0 kubenswrapper[7271]: I0313 10:40:07.784166 7271 scope.go:117] "RemoveContainer" containerID="f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43" Mar 13 10:40:07.784975 master-0 kubenswrapper[7271]: I0313 10:40:07.784930 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.785060 master-0 kubenswrapper[7271]: I0313 10:40:07.784984 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzxzs\" (UniqueName: 
\"kubernetes.io/projected/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-kube-api-access-dzxzs\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.785060 master-0 kubenswrapper[7271]: I0313 10:40:07.785033 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.785299 master-0 kubenswrapper[7271]: I0313 10:40:07.785262 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.785299 master-0 kubenswrapper[7271]: E0313 10:40:07.784947 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43\": container with ID starting with f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43 not found: ID does not exist" containerID="f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43" Mar 13 10:40:07.785395 master-0 kubenswrapper[7271]: I0313 10:40:07.785318 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43"} err="failed to get container status 
\"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43\": rpc error: code = NotFound desc = could not find container \"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43\": container with ID starting with f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43 not found: ID does not exist" Mar 13 10:40:07.785395 master-0 kubenswrapper[7271]: I0313 10:40:07.785343 7271 scope.go:117] "RemoveContainer" containerID="100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7" Mar 13 10:40:07.785842 master-0 kubenswrapper[7271]: E0313 10:40:07.785816 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7\": container with ID starting with 100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7 not found: ID does not exist" containerID="100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7" Mar 13 10:40:07.785977 master-0 kubenswrapper[7271]: I0313 10:40:07.785954 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7"} err="failed to get container status \"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7\": rpc error: code = NotFound desc = could not find container \"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7\": container with ID starting with 100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7 not found: ID does not exist" Mar 13 10:40:07.786059 master-0 kubenswrapper[7271]: I0313 10:40:07.786046 7271 scope.go:117] "RemoveContainer" containerID="9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32" Mar 13 10:40:07.789822 master-0 kubenswrapper[7271]: E0313 10:40:07.789753 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32\": container with ID starting with 9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32 not found: ID does not exist" containerID="9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32" Mar 13 10:40:07.790163 master-0 kubenswrapper[7271]: I0313 10:40:07.790124 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32"} err="failed to get container status \"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32\": rpc error: code = NotFound desc = could not find container \"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32\": container with ID starting with 9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32 not found: ID does not exist" Mar 13 10:40:07.790267 master-0 kubenswrapper[7271]: I0313 10:40:07.790251 7271 scope.go:117] "RemoveContainer" containerID="f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43" Mar 13 10:40:07.790970 master-0 kubenswrapper[7271]: I0313 10:40:07.790871 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43"} err="failed to get container status \"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43\": rpc error: code = NotFound desc = could not find container \"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43\": container with ID starting with f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43 not found: ID does not exist" Mar 13 10:40:07.791109 master-0 kubenswrapper[7271]: I0313 10:40:07.791092 7271 scope.go:117] "RemoveContainer" containerID="100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7" Mar 13 10:40:07.791493 master-0 kubenswrapper[7271]: I0313 10:40:07.791402 7271 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7"} err="failed to get container status \"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7\": rpc error: code = NotFound desc = could not find container \"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7\": container with ID starting with 100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7 not found: ID does not exist" Mar 13 10:40:07.791623 master-0 kubenswrapper[7271]: I0313 10:40:07.791605 7271 scope.go:117] "RemoveContainer" containerID="9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32" Mar 13 10:40:07.791951 master-0 kubenswrapper[7271]: I0313 10:40:07.791927 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32"} err="failed to get container status \"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32\": rpc error: code = NotFound desc = could not find container \"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32\": container with ID starting with 9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32 not found: ID does not exist" Mar 13 10:40:07.792045 master-0 kubenswrapper[7271]: I0313 10:40:07.792028 7271 scope.go:117] "RemoveContainer" containerID="f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43" Mar 13 10:40:07.792365 master-0 kubenswrapper[7271]: I0313 10:40:07.792344 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43"} err="failed to get container status \"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43\": rpc error: code = NotFound desc = could not find container \"f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43\": container with ID starting with 
f81f14d4e508ef7bb256b9171576d4f57c00af38311bc455e2ccda7164e68c43 not found: ID does not exist" Mar 13 10:40:07.792453 master-0 kubenswrapper[7271]: I0313 10:40:07.792439 7271 scope.go:117] "RemoveContainer" containerID="100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7" Mar 13 10:40:07.792916 master-0 kubenswrapper[7271]: I0313 10:40:07.792860 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7"} err="failed to get container status \"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7\": rpc error: code = NotFound desc = could not find container \"100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7\": container with ID starting with 100dfc2be2e7846e0324dc10f52c0efd8b4389ad704a75ee7c820a85797892d7 not found: ID does not exist" Mar 13 10:40:07.792994 master-0 kubenswrapper[7271]: I0313 10:40:07.792921 7271 scope.go:117] "RemoveContainer" containerID="9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32" Mar 13 10:40:07.793690 master-0 kubenswrapper[7271]: I0313 10:40:07.793655 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32"} err="failed to get container status \"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32\": rpc error: code = NotFound desc = could not find container \"9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32\": container with ID starting with 9ece2d430c9cce48e1f930f8ad614c56dda2379c76f4e983342353f18c6e0d32 not found: ID does not exist" Mar 13 10:40:07.811413 master-0 kubenswrapper[7271]: I0313 10:40:07.811329 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x"] Mar 13 10:40:07.813477 master-0 kubenswrapper[7271]: I0313 10:40:07.813441 
7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.817282 master-0 kubenswrapper[7271]: I0313 10:40:07.816975 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 10:40:07.817282 master-0 kubenswrapper[7271]: I0313 10:40:07.817030 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7mc4m" Mar 13 10:40:07.817282 master-0 kubenswrapper[7271]: I0313 10:40:07.817062 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 10:40:07.817282 master-0 kubenswrapper[7271]: I0313 10:40:07.817096 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 10:40:07.817282 master-0 kubenswrapper[7271]: I0313 10:40:07.817177 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 10:40:07.817530 master-0 kubenswrapper[7271]: I0313 10:40:07.817516 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 10:40:07.883488 master-0 kubenswrapper[7271]: I0313 10:40:07.883429 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:07.883488 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:07.883488 master-0 kubenswrapper[7271]: 
[+]process-running ok Mar 13 10:40:07.883488 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:07.883488 master-0 kubenswrapper[7271]: I0313 10:40:07.883497 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:07.886527 master-0 kubenswrapper[7271]: I0313 10:40:07.886115 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.886527 master-0 kubenswrapper[7271]: I0313 10:40:07.886177 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.886527 master-0 kubenswrapper[7271]: I0313 10:40:07.886222 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.886527 master-0 kubenswrapper[7271]: I0313 10:40:07.886242 7271 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-dzxzs\" (UniqueName: \"kubernetes.io/projected/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-kube-api-access-dzxzs\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.886527 master-0 kubenswrapper[7271]: I0313 10:40:07.886270 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.886527 master-0 kubenswrapper[7271]: I0313 10:40:07.886295 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.886527 master-0 kubenswrapper[7271]: I0313 10:40:07.886319 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.886527 master-0 kubenswrapper[7271]: I0313 10:40:07.886335 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.886527 master-0 kubenswrapper[7271]: I0313 10:40:07.886374 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j25nl\" (UniqueName: \"kubernetes.io/projected/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-kube-api-access-j25nl\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.889373 master-0 kubenswrapper[7271]: I0313 10:40:07.888046 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.890394 master-0 kubenswrapper[7271]: I0313 10:40:07.890343 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.892278 master-0 kubenswrapper[7271]: I0313 10:40:07.892203 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.903412 master-0 kubenswrapper[7271]: I0313 10:40:07.903342 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzxzs\" (UniqueName: \"kubernetes.io/projected/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-kube-api-access-dzxzs\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.977698 master-0 kubenswrapper[7271]: I0313 10:40:07.977616 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:40:07.987807 master-0 kubenswrapper[7271]: I0313 10:40:07.987518 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j25nl\" (UniqueName: \"kubernetes.io/projected/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-kube-api-access-j25nl\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.987807 master-0 kubenswrapper[7271]: I0313 10:40:07.987604 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.987807 master-0 kubenswrapper[7271]: I0313 10:40:07.987652 7271 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.987807 master-0 kubenswrapper[7271]: I0313 10:40:07.987697 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.987807 master-0 kubenswrapper[7271]: I0313 10:40:07.987719 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.987807 master-0 kubenswrapper[7271]: I0313 10:40:07.987829 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.989297 master-0 kubenswrapper[7271]: I0313 10:40:07.988957 7271 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.989297 master-0 kubenswrapper[7271]: I0313 10:40:07.989259 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:07.995221 master-0 kubenswrapper[7271]: I0313 10:40:07.995149 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:08.007005 master-0 kubenswrapper[7271]: I0313 10:40:08.006947 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j25nl\" (UniqueName: \"kubernetes.io/projected/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-kube-api-access-j25nl\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:08.141230 master-0 kubenswrapper[7271]: I0313 10:40:08.140714 7271 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:40:08.399934 master-0 kubenswrapper[7271]: I0313 10:40:08.399561 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp"] Mar 13 10:40:08.401615 master-0 kubenswrapper[7271]: W0313 10:40:08.401328 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d8af021_f20f_48a2_8b2a_3a5a3f37237f.slice/crio-5e13cffe1976b1fe526e31bded64fe9c448e434d19da41cefd16fb763080f8bc WatchSource:0}: Error finding container 5e13cffe1976b1fe526e31bded64fe9c448e434d19da41cefd16fb763080f8bc: Status 404 returned error can't find the container with id 5e13cffe1976b1fe526e31bded64fe9c448e434d19da41cefd16fb763080f8bc Mar 13 10:40:08.723078 master-0 kubenswrapper[7271]: I0313 10:40:08.723004 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" event={"ID":"9d8af021-f20f-48a2-8b2a-3a5a3f37237f","Type":"ContainerStarted","Data":"5e13cffe1976b1fe526e31bded64fe9c448e434d19da41cefd16fb763080f8bc"} Mar 13 10:40:08.729009 master-0 kubenswrapper[7271]: I0313 10:40:08.728965 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" event={"ID":"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c","Type":"ContainerStarted","Data":"9c62b3c2fdc62403c70efa03c341af1e11c584005c0854a7b9ae04a0957b3988"} Mar 13 10:40:08.729009 master-0 kubenswrapper[7271]: I0313 10:40:08.729009 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" 
event={"ID":"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c","Type":"ContainerStarted","Data":"f5630038dc1bb4e46b0c3343da5e699daf5fd3e0af484ddecd21f624462048e4"} Mar 13 10:40:08.729198 master-0 kubenswrapper[7271]: I0313 10:40:08.729022 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" event={"ID":"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c","Type":"ContainerStarted","Data":"4d051c8ad32b7669f426e6d80e6632cee3e398cb08f827d5c2ff51c92ed352a3"} Mar 13 10:40:08.886675 master-0 kubenswrapper[7271]: I0313 10:40:08.885800 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:08.886675 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:08.886675 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:08.886675 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:08.886675 master-0 kubenswrapper[7271]: I0313 10:40:08.885945 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:09.654213 master-0 kubenswrapper[7271]: I0313 10:40:09.654163 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f64995-fa1e-4205-981b-be7a7ae67115" path="/var/lib/kubelet/pods/e9f64995-fa1e-4205-981b-be7a7ae67115/volumes" Mar 13 10:40:09.740192 master-0 kubenswrapper[7271]: I0313 10:40:09.740122 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" 
event={"ID":"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c","Type":"ContainerStarted","Data":"3bbbe7499776e3df9ed67686c8b0e37d144c2f87c51259e545536f5fcc588fda"} Mar 13 10:40:09.769089 master-0 kubenswrapper[7271]: I0313 10:40:09.769003 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" podStartSLOduration=2.7689857780000002 podStartE2EDuration="2.768985778s" podCreationTimestamp="2026-03-13 10:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:40:09.7667909 +0000 UTC m=+264.293613300" watchObservedRunningTime="2026-03-13 10:40:09.768985778 +0000 UTC m=+264.295808168" Mar 13 10:40:09.882803 master-0 kubenswrapper[7271]: I0313 10:40:09.882736 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:09.882803 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:09.882803 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:09.882803 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:09.882803 master-0 kubenswrapper[7271]: I0313 10:40:09.882798 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:10.512931 master-0 kubenswrapper[7271]: I0313 10:40:10.508048 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:40:10.512931 master-0 kubenswrapper[7271]: I0313 10:40:10.512286 7271 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:40:10.541331 master-0 kubenswrapper[7271]: I0313 10:40:10.540934 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:40:10.558209 master-0 kubenswrapper[7271]: I0313 10:40:10.553192 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:40:10.565532 master-0 kubenswrapper[7271]: I0313 10:40:10.565450 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:40:10.593467 master-0 kubenswrapper[7271]: I0313 10:40:10.593400 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:40:10.747416 master-0 kubenswrapper[7271]: I0313 10:40:10.747332 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" event={"ID":"9d8af021-f20f-48a2-8b2a-3a5a3f37237f","Type":"ContainerStarted","Data":"2537f236ef8ef1362cc7e19a55185296f7210acf499d9ae309ac029f7c087010"} Mar 13 10:40:10.747416 master-0 kubenswrapper[7271]: I0313 10:40:10.747391 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" event={"ID":"9d8af021-f20f-48a2-8b2a-3a5a3f37237f","Type":"ContainerStarted","Data":"0af2b43f159a02ff83ea821a5a5813bac5e0284f929e0a4545c1e4a0caed6f79"} Mar 13 10:40:10.836763 master-0 kubenswrapper[7271]: I0313 10:40:10.836452 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" podStartSLOduration=2.184778203 podStartE2EDuration="3.836413936s" podCreationTimestamp="2026-03-13 10:40:07 +0000 UTC" firstStartedPulling="2026-03-13 10:40:08.403257499 +0000 
UTC m=+262.930079919" lastFinishedPulling="2026-03-13 10:40:10.054893272 +0000 UTC m=+264.581715652" observedRunningTime="2026-03-13 10:40:10.836071837 +0000 UTC m=+265.362894227" watchObservedRunningTime="2026-03-13 10:40:10.836413936 +0000 UTC m=+265.363236326" Mar 13 10:40:10.882700 master-0 kubenswrapper[7271]: I0313 10:40:10.882639 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:10.882700 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:10.882700 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:10.882700 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:10.882988 master-0 kubenswrapper[7271]: I0313 10:40:10.882714 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:11.882102 master-0 kubenswrapper[7271]: I0313 10:40:11.882037 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:11.882102 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:11.882102 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:11.882102 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:11.883666 master-0 kubenswrapper[7271]: I0313 10:40:11.882110 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 13 10:40:12.300749 master-0 kubenswrapper[7271]: I0313 10:40:12.300666 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:40:12.335321 master-0 kubenswrapper[7271]: I0313 10:40:12.335258 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:40:12.880951 master-0 kubenswrapper[7271]: I0313 10:40:12.880879 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:40:12.883233 master-0 kubenswrapper[7271]: I0313 10:40:12.883177 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:12.883233 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:12.883233 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:12.883233 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:12.883709 master-0 kubenswrapper[7271]: I0313 10:40:12.883269 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:13.883107 master-0 kubenswrapper[7271]: I0313 10:40:13.883037 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:13.883107 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:13.883107 master-0 
kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:13.883107 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:13.883731 master-0 kubenswrapper[7271]: I0313 10:40:13.883115 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:14.883193 master-0 kubenswrapper[7271]: I0313 10:40:14.883128 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:14.883193 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:14.883193 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:14.883193 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:14.883885 master-0 kubenswrapper[7271]: I0313 10:40:14.883201 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:15.882849 master-0 kubenswrapper[7271]: I0313 10:40:15.882746 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:15.882849 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:15.882849 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:15.882849 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:15.882849 master-0 kubenswrapper[7271]: I0313 10:40:15.882849 7271 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:16.883260 master-0 kubenswrapper[7271]: I0313 10:40:16.883204 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:16.883260 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:16.883260 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:16.883260 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:16.884039 master-0 kubenswrapper[7271]: I0313 10:40:16.883995 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:17.883034 master-0 kubenswrapper[7271]: I0313 10:40:17.882937 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:17.883034 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:17.883034 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:17.883034 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:17.883758 master-0 kubenswrapper[7271]: I0313 10:40:17.883039 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 13 10:40:18.882712 master-0 kubenswrapper[7271]: I0313 10:40:18.882647 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:18.882712 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:18.882712 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:18.882712 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:18.883004 master-0 kubenswrapper[7271]: I0313 10:40:18.882734 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:19.883109 master-0 kubenswrapper[7271]: I0313 10:40:19.883025 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:19.883109 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:19.883109 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:19.883109 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:19.883109 master-0 kubenswrapper[7271]: I0313 10:40:19.883110 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:20.630213 master-0 kubenswrapper[7271]: I0313 10:40:20.630151 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm"] Mar 13 
10:40:20.631786 master-0 kubenswrapper[7271]: I0313 10:40:20.631767 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.636246 master-0 kubenswrapper[7271]: I0313 10:40:20.636202 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 10:40:20.636570 master-0 kubenswrapper[7271]: I0313 10:40:20.636256 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-4xgf9" Mar 13 10:40:20.636649 master-0 kubenswrapper[7271]: I0313 10:40:20.636321 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 13 10:40:20.646175 master-0 kubenswrapper[7271]: I0313 10:40:20.646108 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.646410 master-0 kubenswrapper[7271]: I0313 10:40:20.646208 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.646410 master-0 kubenswrapper[7271]: I0313 10:40:20.646236 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-d8kvd\" (UniqueName: \"kubernetes.io/projected/5448b59a-b731-45a3-9ded-d25315f597fb-kube-api-access-d8kvd\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.646410 master-0 kubenswrapper[7271]: I0313 10:40:20.646264 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5448b59a-b731-45a3-9ded-d25315f597fb-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.646511 master-0 kubenswrapper[7271]: I0313 10:40:20.646435 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm"] Mar 13 10:40:20.718656 master-0 kubenswrapper[7271]: I0313 10:40:20.716678 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-mtcsw"] Mar 13 10:40:20.718656 master-0 kubenswrapper[7271]: I0313 10:40:20.718268 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.725609 master-0 kubenswrapper[7271]: I0313 10:40:20.724113 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9b2d2" Mar 13 10:40:20.725609 master-0 kubenswrapper[7271]: I0313 10:40:20.724385 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 10:40:20.725609 master-0 kubenswrapper[7271]: I0313 10:40:20.724535 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 13 10:40:20.730602 master-0 kubenswrapper[7271]: I0313 10:40:20.726807 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn"] Mar 13 10:40:20.730602 master-0 kubenswrapper[7271]: I0313 10:40:20.729784 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.732272 master-0 kubenswrapper[7271]: I0313 10:40:20.732236 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 13 10:40:20.732463 master-0 kubenswrapper[7271]: I0313 10:40:20.732437 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 13 10:40:20.732537 master-0 kubenswrapper[7271]: I0313 10:40:20.732511 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 10:40:20.732653 master-0 kubenswrapper[7271]: I0313 10:40:20.732622 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-ncbcz" Mar 13 10:40:20.745634 master-0 kubenswrapper[7271]: I0313 10:40:20.743796 7271 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn"] Mar 13 10:40:20.747614 master-0 kubenswrapper[7271]: I0313 10:40:20.747565 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kvd\" (UniqueName: \"kubernetes.io/projected/5448b59a-b731-45a3-9ded-d25315f597fb-kube-api-access-d8kvd\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.747799 master-0 kubenswrapper[7271]: I0313 10:40:20.747779 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.747900 master-0 kubenswrapper[7271]: I0313 10:40:20.747884 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5448b59a-b731-45a3-9ded-d25315f597fb-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.747982 master-0 kubenswrapper[7271]: I0313 10:40:20.747970 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.748101 master-0 kubenswrapper[7271]: I0313 10:40:20.748087 7271 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.748181 master-0 kubenswrapper[7271]: I0313 10:40:20.748169 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48nns\" (UniqueName: \"kubernetes.io/projected/5b796628-a6ca-4d5c-9870-0ca60b9372aa-kube-api-access-48nns\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.748253 master-0 kubenswrapper[7271]: I0313 10:40:20.748242 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b796628-a6ca-4d5c-9870-0ca60b9372aa-metrics-client-ca\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.748462 master-0 kubenswrapper[7271]: I0313 10:40:20.748324 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-sys\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.748544 master-0 kubenswrapper[7271]: I0313 10:40:20.748532 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-kube-rbac-proxy-config\") pod 
\"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.748643 master-0 kubenswrapper[7271]: I0313 10:40:20.748625 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.748729 master-0 kubenswrapper[7271]: I0313 10:40:20.748717 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-textfile\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.748811 master-0 kubenswrapper[7271]: I0313 10:40:20.748798 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.748885 master-0 kubenswrapper[7271]: I0313 10:40:20.748871 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-wtmp\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 
10:40:20.748987 master-0 kubenswrapper[7271]: I0313 10:40:20.748975 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-root\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.749063 master-0 kubenswrapper[7271]: I0313 10:40:20.749051 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.749147 master-0 kubenswrapper[7271]: I0313 10:40:20.749134 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdg6f\" (UniqueName: \"kubernetes.io/projected/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-api-access-qdg6f\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.749221 master-0 kubenswrapper[7271]: I0313 10:40:20.749209 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.749289 master-0 kubenswrapper[7271]: I0313 10:40:20.749278 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-tls\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.755813 master-0 kubenswrapper[7271]: I0313 10:40:20.753362 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5448b59a-b731-45a3-9ded-d25315f597fb-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.755813 master-0 kubenswrapper[7271]: I0313 10:40:20.753401 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.755813 master-0 kubenswrapper[7271]: I0313 10:40:20.755559 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.794531 master-0 kubenswrapper[7271]: I0313 10:40:20.794472 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kvd\" (UniqueName: \"kubernetes.io/projected/5448b59a-b731-45a3-9ded-d25315f597fb-kube-api-access-d8kvd\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " 
pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.850189 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-textfile\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.850255 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.850769 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-root\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.850809 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-textfile\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.850815 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: 
\"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-wtmp\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.850874 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.850931 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdg6f\" (UniqueName: \"kubernetes.io/projected/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-api-access-qdg6f\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.850959 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.850989 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-tls\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.851075 
7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.851116 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.851836 master-0 kubenswrapper[7271]: I0313 10:40:20.851140 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48nns\" (UniqueName: \"kubernetes.io/projected/5b796628-a6ca-4d5c-9870-0ca60b9372aa-kube-api-access-48nns\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.851349 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-wtmp\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.851599 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b796628-a6ca-4d5c-9870-0ca60b9372aa-metrics-client-ca\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " 
pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.851795 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.854833 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-sys\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.852881 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b796628-a6ca-4d5c-9870-0ca60b9372aa-metrics-client-ca\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.851835 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-root\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.853538 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: 
\"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.854802 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-sys\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.854974 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.857445 master-0 kubenswrapper[7271]: I0313 10:40:20.853122 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.858519 master-0 kubenswrapper[7271]: I0313 10:40:20.858464 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.858822 master-0 kubenswrapper[7271]: I0313 10:40:20.858776 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.859409 master-0 kubenswrapper[7271]: I0313 10:40:20.859357 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-tls\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.862986 master-0 kubenswrapper[7271]: I0313 10:40:20.862165 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.879966 master-0 kubenswrapper[7271]: I0313 10:40:20.879899 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48nns\" (UniqueName: \"kubernetes.io/projected/5b796628-a6ca-4d5c-9870-0ca60b9372aa-kube-api-access-48nns\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:20.883382 master-0 kubenswrapper[7271]: I0313 10:40:20.883218 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdg6f\" (UniqueName: \"kubernetes.io/projected/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-api-access-qdg6f\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:20.885867 
master-0 kubenswrapper[7271]: I0313 10:40:20.885832 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:20.885867 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:20.885867 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:20.885867 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:20.886134 master-0 kubenswrapper[7271]: I0313 10:40:20.886092 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:21.008268 master-0 kubenswrapper[7271]: I0313 10:40:21.007519 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:40:21.111068 master-0 kubenswrapper[7271]: I0313 10:40:21.111006 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:40:21.135091 master-0 kubenswrapper[7271]: I0313 10:40:21.134982 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:40:21.370449 master-0 kubenswrapper[7271]: I0313 10:40:21.370392 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm"] Mar 13 10:40:21.518199 master-0 kubenswrapper[7271]: I0313 10:40:21.518143 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn"] Mar 13 10:40:21.820572 master-0 kubenswrapper[7271]: I0313 10:40:21.820499 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mtcsw" event={"ID":"5b796628-a6ca-4d5c-9870-0ca60b9372aa","Type":"ContainerStarted","Data":"6724c795aeefb2de7ccb8edf6dd545a4648253bccf79de04ddb0f389fe53a8e7"} Mar 13 10:40:21.823194 master-0 kubenswrapper[7271]: I0313 10:40:21.822465 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" event={"ID":"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8","Type":"ContainerStarted","Data":"2b19f149420c8c5bdd28117ec0014c144ba254d289aeb742b7f29c424c5d661a"} Mar 13 10:40:21.824806 master-0 kubenswrapper[7271]: I0313 10:40:21.824780 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" event={"ID":"5448b59a-b731-45a3-9ded-d25315f597fb","Type":"ContainerStarted","Data":"dad2f49f9008c8518151c19405d8439b51be283f0e1b4b5bb490e23350203145"} Mar 13 10:40:21.824886 master-0 kubenswrapper[7271]: I0313 10:40:21.824812 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" event={"ID":"5448b59a-b731-45a3-9ded-d25315f597fb","Type":"ContainerStarted","Data":"77c970e127ad99655147e9b65a84b3f61bc8a959cc617424e07e8058847e4379"} Mar 13 10:40:21.824886 master-0 kubenswrapper[7271]: I0313 10:40:21.824829 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" event={"ID":"5448b59a-b731-45a3-9ded-d25315f597fb","Type":"ContainerStarted","Data":"324185d8aba3ef3e122592b3ddf0fb321d8d4d7598b9bfc330b8735d340f3d78"} Mar 13 10:40:21.886474 master-0 kubenswrapper[7271]: I0313 10:40:21.886368 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:21.886474 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:21.886474 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:21.886474 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:21.886474 master-0 kubenswrapper[7271]: I0313 10:40:21.886454 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:22.886178 master-0 kubenswrapper[7271]: I0313 10:40:22.885449 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:22.886178 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:22.886178 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:22.886178 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:22.886178 master-0 kubenswrapper[7271]: I0313 10:40:22.885511 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 13 10:40:23.843561 master-0 kubenswrapper[7271]: I0313 10:40:23.843476 7271 generic.go:334] "Generic (PLEG): container finished" podID="5b796628-a6ca-4d5c-9870-0ca60b9372aa" containerID="577603115cdc92c071dc30636bcb46ec49417f5c3a611797a0ac27b51d21642e" exitCode=0 Mar 13 10:40:23.843561 master-0 kubenswrapper[7271]: I0313 10:40:23.843537 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mtcsw" event={"ID":"5b796628-a6ca-4d5c-9870-0ca60b9372aa","Type":"ContainerDied","Data":"577603115cdc92c071dc30636bcb46ec49417f5c3a611797a0ac27b51d21642e"} Mar 13 10:40:23.882840 master-0 kubenswrapper[7271]: I0313 10:40:23.882777 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:23.882840 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:23.882840 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:23.882840 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:23.883091 master-0 kubenswrapper[7271]: I0313 10:40:23.882849 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:24.853305 master-0 kubenswrapper[7271]: I0313 10:40:24.853215 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mtcsw" event={"ID":"5b796628-a6ca-4d5c-9870-0ca60b9372aa","Type":"ContainerStarted","Data":"0609a0843424216d58fe74ce545120d852a9eaf8a20b3a7476d485f33e103592"} Mar 13 10:40:24.853305 master-0 kubenswrapper[7271]: I0313 10:40:24.853272 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mtcsw" 
event={"ID":"5b796628-a6ca-4d5c-9870-0ca60b9372aa","Type":"ContainerStarted","Data":"c5a92364ea3fbcd12bd2fa88facd16fb284236b39f4554aed061ae93ca54be09"} Mar 13 10:40:24.857714 master-0 kubenswrapper[7271]: I0313 10:40:24.857461 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" event={"ID":"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8","Type":"ContainerStarted","Data":"ecf2a94f5744b0cca2e72fd6d41ee477092a04228ec4accbf484c341924855df"} Mar 13 10:40:24.857714 master-0 kubenswrapper[7271]: I0313 10:40:24.857490 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" event={"ID":"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8","Type":"ContainerStarted","Data":"ab445f250c1827d0cda335ed4c70a3039bc98f3bbb2629c590db523a40c711de"} Mar 13 10:40:24.857714 master-0 kubenswrapper[7271]: I0313 10:40:24.857499 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" event={"ID":"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8","Type":"ContainerStarted","Data":"7c90999e7bdfe04bacd8eedb02b39b33d65f469c3d2057eeb85b21e87211b186"} Mar 13 10:40:24.860785 master-0 kubenswrapper[7271]: I0313 10:40:24.860751 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" event={"ID":"5448b59a-b731-45a3-9ded-d25315f597fb","Type":"ContainerStarted","Data":"744180f661cfc45848ab063152471a48aa47cdf91f1a427c30212d83bbda0506"} Mar 13 10:40:24.872190 master-0 kubenswrapper[7271]: I0313 10:40:24.870539 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-mtcsw" podStartSLOduration=3.436853824 podStartE2EDuration="4.870513043s" podCreationTimestamp="2026-03-13 10:40:20 +0000 UTC" firstStartedPulling="2026-03-13 10:40:21.151804032 +0000 UTC m=+275.678626422" lastFinishedPulling="2026-03-13 10:40:22.585463251 +0000 UTC 
m=+277.112285641" observedRunningTime="2026-03-13 10:40:24.868449727 +0000 UTC m=+279.395272117" watchObservedRunningTime="2026-03-13 10:40:24.870513043 +0000 UTC m=+279.397335453" Mar 13 10:40:24.884307 master-0 kubenswrapper[7271]: I0313 10:40:24.882737 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:24.884307 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:24.884307 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:24.884307 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:24.884307 master-0 kubenswrapper[7271]: I0313 10:40:24.882810 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:24.884307 master-0 kubenswrapper[7271]: I0313 10:40:24.883850 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" podStartSLOduration=2.89411684 podStartE2EDuration="4.883829891s" podCreationTimestamp="2026-03-13 10:40:20 +0000 UTC" firstStartedPulling="2026-03-13 10:40:21.78242982 +0000 UTC m=+276.309252210" lastFinishedPulling="2026-03-13 10:40:23.772142871 +0000 UTC m=+278.298965261" observedRunningTime="2026-03-13 10:40:24.882117075 +0000 UTC m=+279.408939475" watchObservedRunningTime="2026-03-13 10:40:24.883829891 +0000 UTC m=+279.410652281" Mar 13 10:40:24.903088 master-0 kubenswrapper[7271]: I0313 10:40:24.902954 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" podStartSLOduration=2.698854995 
podStartE2EDuration="4.902931336s" podCreationTimestamp="2026-03-13 10:40:20 +0000 UTC" firstStartedPulling="2026-03-13 10:40:21.566003834 +0000 UTC m=+276.092826224" lastFinishedPulling="2026-03-13 10:40:23.770080175 +0000 UTC m=+278.296902565" observedRunningTime="2026-03-13 10:40:24.900149081 +0000 UTC m=+279.426971471" watchObservedRunningTime="2026-03-13 10:40:24.902931336 +0000 UTC m=+279.429753736" Mar 13 10:40:25.882778 master-0 kubenswrapper[7271]: I0313 10:40:25.882724 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:25.882778 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:25.882778 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:25.882778 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:25.883160 master-0 kubenswrapper[7271]: I0313 10:40:25.882801 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:26.275768 master-0 kubenswrapper[7271]: I0313 10:40:26.275621 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-68597ccc5b-xrb8c"] Mar 13 10:40:26.277149 master-0 kubenswrapper[7271]: I0313 10:40:26.276691 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.281142 master-0 kubenswrapper[7271]: I0313 10:40:26.278649 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 13 10:40:26.281142 master-0 kubenswrapper[7271]: I0313 10:40:26.278687 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-jkx4c" Mar 13 10:40:26.281504 master-0 kubenswrapper[7271]: I0313 10:40:26.281374 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 13 10:40:26.281914 master-0 kubenswrapper[7271]: I0313 10:40:26.281870 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7mk1tpvcusf46" Mar 13 10:40:26.281966 master-0 kubenswrapper[7271]: I0313 10:40:26.281937 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 10:40:26.282548 master-0 kubenswrapper[7271]: I0313 10:40:26.282521 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 13 10:40:26.292547 master-0 kubenswrapper[7271]: I0313 10:40:26.292344 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68597ccc5b-xrb8c"] Mar 13 10:40:26.356518 master-0 kubenswrapper[7271]: I0313 10:40:26.356472 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.356777 master-0 kubenswrapper[7271]: I0313 10:40:26.356758 7271 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.356929 master-0 kubenswrapper[7271]: I0313 10:40:26.356913 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5hq9\" (UniqueName: \"kubernetes.io/projected/b68ed803-45e2-42f1-99b1-33cf59b01d74-kube-api-access-q5hq9\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.357034 master-0 kubenswrapper[7271]: I0313 10:40:26.357020 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.357152 master-0 kubenswrapper[7271]: I0313 10:40:26.357132 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.357457 master-0 kubenswrapper[7271]: I0313 10:40:26.357387 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.357652 master-0 kubenswrapper[7271]: I0313 10:40:26.357568 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b68ed803-45e2-42f1-99b1-33cf59b01d74-audit-log\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.459694 master-0 kubenswrapper[7271]: I0313 10:40:26.459596 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.459694 master-0 kubenswrapper[7271]: I0313 10:40:26.459688 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.459994 master-0 kubenswrapper[7271]: I0313 10:40:26.459733 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5hq9\" (UniqueName: \"kubernetes.io/projected/b68ed803-45e2-42f1-99b1-33cf59b01d74-kube-api-access-q5hq9\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.459994 master-0 
kubenswrapper[7271]: I0313 10:40:26.459785 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.460213 master-0 kubenswrapper[7271]: I0313 10:40:26.460152 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.460431 master-0 kubenswrapper[7271]: I0313 10:40:26.460399 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.461774 master-0 kubenswrapper[7271]: I0313 10:40:26.460470 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b68ed803-45e2-42f1-99b1-33cf59b01d74-audit-log\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.461774 master-0 kubenswrapper[7271]: I0313 10:40:26.461020 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b68ed803-45e2-42f1-99b1-33cf59b01d74-audit-log\") pod 
\"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.461774 master-0 kubenswrapper[7271]: I0313 10:40:26.461129 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.461774 master-0 kubenswrapper[7271]: I0313 10:40:26.461728 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.463663 master-0 kubenswrapper[7271]: I0313 10:40:26.463590 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.463976 master-0 kubenswrapper[7271]: I0313 10:40:26.463910 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.465588 master-0 kubenswrapper[7271]: I0313 10:40:26.465473 7271 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.479739 master-0 kubenswrapper[7271]: I0313 10:40:26.479692 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5hq9\" (UniqueName: \"kubernetes.io/projected/b68ed803-45e2-42f1-99b1-33cf59b01d74-kube-api-access-q5hq9\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.600342 master-0 kubenswrapper[7271]: I0313 10:40:26.600268 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:26.883322 master-0 kubenswrapper[7271]: I0313 10:40:26.883103 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:26.883322 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:26.883322 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:26.883322 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:26.883322 master-0 kubenswrapper[7271]: I0313 10:40:26.883196 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:27.000399 master-0 kubenswrapper[7271]: I0313 10:40:27.000332 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/metrics-server-68597ccc5b-xrb8c"] Mar 13 10:40:27.001489 master-0 kubenswrapper[7271]: W0313 10:40:27.001430 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb68ed803_45e2_42f1_99b1_33cf59b01d74.slice/crio-823dd75fa90067312411e552ed573617320e3f633eba91399bcfb19342dfaab8 WatchSource:0}: Error finding container 823dd75fa90067312411e552ed573617320e3f633eba91399bcfb19342dfaab8: Status 404 returned error can't find the container with id 823dd75fa90067312411e552ed573617320e3f633eba91399bcfb19342dfaab8 Mar 13 10:40:27.882447 master-0 kubenswrapper[7271]: I0313 10:40:27.882381 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:27.882447 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:27.882447 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:27.882447 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:27.882838 master-0 kubenswrapper[7271]: I0313 10:40:27.882447 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:27.883833 master-0 kubenswrapper[7271]: I0313 10:40:27.883786 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" event={"ID":"b68ed803-45e2-42f1-99b1-33cf59b01d74","Type":"ContainerStarted","Data":"823dd75fa90067312411e552ed573617320e3f633eba91399bcfb19342dfaab8"} Mar 13 10:40:28.882457 master-0 kubenswrapper[7271]: I0313 10:40:28.882384 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:28.882457 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:28.882457 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:28.882457 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:28.882824 master-0 kubenswrapper[7271]: I0313 10:40:28.882457 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:28.891720 master-0 kubenswrapper[7271]: I0313 10:40:28.891660 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" event={"ID":"b68ed803-45e2-42f1-99b1-33cf59b01d74","Type":"ContainerStarted","Data":"a53ccb10d38781462661d28f14cee8ad4f8374b8664112cbbcf7c91c9615f04e"} Mar 13 10:40:28.906420 master-0 kubenswrapper[7271]: I0313 10:40:28.906343 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" podStartSLOduration=1.4265453080000001 podStartE2EDuration="2.90632016s" podCreationTimestamp="2026-03-13 10:40:26 +0000 UTC" firstStartedPulling="2026-03-13 10:40:27.003789767 +0000 UTC m=+281.530612157" lastFinishedPulling="2026-03-13 10:40:28.483564619 +0000 UTC m=+283.010387009" observedRunningTime="2026-03-13 10:40:28.904389768 +0000 UTC m=+283.431212158" watchObservedRunningTime="2026-03-13 10:40:28.90632016 +0000 UTC m=+283.433142540" Mar 13 10:40:29.882990 master-0 kubenswrapper[7271]: I0313 10:40:29.882942 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:29.882990 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:29.882990 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:29.882990 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:29.883371 master-0 kubenswrapper[7271]: I0313 10:40:29.883342 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:30.883041 master-0 kubenswrapper[7271]: I0313 10:40:30.882970 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:30.883041 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:30.883041 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:30.883041 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:30.883950 master-0 kubenswrapper[7271]: I0313 10:40:30.883063 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:31.883023 master-0 kubenswrapper[7271]: I0313 10:40:31.882692 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:31.883023 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:31.883023 master-0 kubenswrapper[7271]: 
[+]process-running ok Mar 13 10:40:31.883023 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:31.883023 master-0 kubenswrapper[7271]: I0313 10:40:31.882760 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:32.883085 master-0 kubenswrapper[7271]: I0313 10:40:32.883022 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:32.883085 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:32.883085 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:32.883085 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:32.883823 master-0 kubenswrapper[7271]: I0313 10:40:32.883106 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:33.882989 master-0 kubenswrapper[7271]: I0313 10:40:33.882907 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:33.882989 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:33.882989 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:33.882989 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:33.882989 master-0 kubenswrapper[7271]: I0313 10:40:33.882991 7271 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:34.882979 master-0 kubenswrapper[7271]: I0313 10:40:34.882921 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:34.882979 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:34.882979 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:34.882979 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:34.883330 master-0 kubenswrapper[7271]: I0313 10:40:34.882991 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:35.882654 master-0 kubenswrapper[7271]: I0313 10:40:35.882579 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:35.882654 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:35.882654 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:35.882654 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:35.883372 master-0 kubenswrapper[7271]: I0313 10:40:35.882693 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
13 10:40:36.883236 master-0 kubenswrapper[7271]: I0313 10:40:36.883139 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:36.883236 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:36.883236 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:36.883236 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:36.883236 master-0 kubenswrapper[7271]: I0313 10:40:36.883239 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:37.882660 master-0 kubenswrapper[7271]: I0313 10:40:37.882573 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:37.882660 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:37.882660 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:37.882660 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:37.882965 master-0 kubenswrapper[7271]: I0313 10:40:37.882681 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:38.883915 master-0 kubenswrapper[7271]: I0313 10:40:38.883824 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:38.883915 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:38.883915 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:38.883915 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:38.884755 master-0 kubenswrapper[7271]: I0313 10:40:38.883937 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:39.883017 master-0 kubenswrapper[7271]: I0313 10:40:39.882943 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:39.883017 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:39.883017 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:39.883017 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:39.883458 master-0 kubenswrapper[7271]: I0313 10:40:39.883027 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:40.883628 master-0 kubenswrapper[7271]: I0313 10:40:40.883472 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:40.883628 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:40.883628 
master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:40.883628 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:40.883628 master-0 kubenswrapper[7271]: I0313 10:40:40.883572 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:41.883514 master-0 kubenswrapper[7271]: I0313 10:40:41.883383 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:41.883514 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:41.883514 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:41.883514 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:41.883514 master-0 kubenswrapper[7271]: I0313 10:40:41.883513 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:42.882697 master-0 kubenswrapper[7271]: I0313 10:40:42.882623 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:42.882697 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:42.882697 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:42.882697 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:42.882697 master-0 kubenswrapper[7271]: I0313 10:40:42.882688 7271 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:43.883377 master-0 kubenswrapper[7271]: I0313 10:40:43.883289 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:43.883377 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:43.883377 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:43.883377 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:43.884068 master-0 kubenswrapper[7271]: I0313 10:40:43.883376 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:44.883162 master-0 kubenswrapper[7271]: I0313 10:40:44.883091 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:44.883162 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:44.883162 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:44.883162 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:44.883993 master-0 kubenswrapper[7271]: I0313 10:40:44.883183 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 13 10:40:45.733648 master-0 kubenswrapper[7271]: I0313 10:40:45.733557 7271 scope.go:117] "RemoveContainer" containerID="4f342d2d66294bd06ac08cc498f323a859474645f1865395b674bff6a68af1e6" Mar 13 10:40:45.883016 master-0 kubenswrapper[7271]: I0313 10:40:45.882891 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:45.883016 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:45.883016 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:45.883016 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:45.883348 master-0 kubenswrapper[7271]: I0313 10:40:45.883059 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:46.600767 master-0 kubenswrapper[7271]: I0313 10:40:46.600704 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:46.601326 master-0 kubenswrapper[7271]: I0313 10:40:46.600807 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:40:46.883395 master-0 kubenswrapper[7271]: I0313 10:40:46.883240 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:46.883395 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:46.883395 master-0 kubenswrapper[7271]: 
[+]process-running ok Mar 13 10:40:46.883395 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:46.883395 master-0 kubenswrapper[7271]: I0313 10:40:46.883329 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:47.883379 master-0 kubenswrapper[7271]: I0313 10:40:47.883273 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:47.883379 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:47.883379 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:47.883379 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:47.884077 master-0 kubenswrapper[7271]: I0313 10:40:47.883395 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:48.883213 master-0 kubenswrapper[7271]: I0313 10:40:48.883118 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:48.883213 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:48.883213 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:48.883213 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:48.883792 master-0 kubenswrapper[7271]: I0313 10:40:48.883234 7271 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:49.882809 master-0 kubenswrapper[7271]: I0313 10:40:49.882749 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:49.882809 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:49.882809 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:49.882809 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:49.883107 master-0 kubenswrapper[7271]: I0313 10:40:49.882820 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:50.882632 master-0 kubenswrapper[7271]: I0313 10:40:50.882527 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:50.882632 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:50.882632 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:50.882632 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:50.883341 master-0 kubenswrapper[7271]: I0313 10:40:50.882635 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
13 10:40:51.883306 master-0 kubenswrapper[7271]: I0313 10:40:51.883247 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:51.883306 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:51.883306 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:51.883306 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:51.883919 master-0 kubenswrapper[7271]: I0313 10:40:51.883326 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:52.043310 master-0 kubenswrapper[7271]: I0313 10:40:52.043244 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/0.log" Mar 13 10:40:52.043310 master-0 kubenswrapper[7271]: I0313 10:40:52.043306 7271 generic.go:334] "Generic (PLEG): container finished" podID="7667717b-fb74-456b-8615-16475cb69e98" containerID="8931f468146aea32eb1151d08ef9573b7c8bddcc57495ce9f6bd5b790621abc0" exitCode=1 Mar 13 10:40:52.043612 master-0 kubenswrapper[7271]: I0313 10:40:52.043345 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerDied","Data":"8931f468146aea32eb1151d08ef9573b7c8bddcc57495ce9f6bd5b790621abc0"} Mar 13 10:40:52.043980 master-0 kubenswrapper[7271]: I0313 10:40:52.043947 7271 scope.go:117] "RemoveContainer" containerID="8931f468146aea32eb1151d08ef9573b7c8bddcc57495ce9f6bd5b790621abc0" Mar 13 10:40:52.882934 master-0 
kubenswrapper[7271]: I0313 10:40:52.882842 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:52.882934 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:52.882934 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:52.882934 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:52.883996 master-0 kubenswrapper[7271]: I0313 10:40:52.882955 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:53.058620 master-0 kubenswrapper[7271]: I0313 10:40:53.058539 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/0.log" Mar 13 10:40:53.058921 master-0 kubenswrapper[7271]: I0313 10:40:53.058668 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerStarted","Data":"246797499d890bbe0f0da9bedf22922185a5c85e0c93f20f83953bdd9898d644"} Mar 13 10:40:53.882904 master-0 kubenswrapper[7271]: I0313 10:40:53.882847 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:53.882904 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:53.882904 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:53.882904 master-0 
kubenswrapper[7271]: healthz check failed Mar 13 10:40:53.883358 master-0 kubenswrapper[7271]: I0313 10:40:53.883321 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:54.883221 master-0 kubenswrapper[7271]: I0313 10:40:54.883108 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:54.883221 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:54.883221 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:54.883221 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:54.884419 master-0 kubenswrapper[7271]: I0313 10:40:54.883223 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:55.881773 master-0 kubenswrapper[7271]: I0313 10:40:55.881710 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:55.881773 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:55.881773 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:55.881773 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:55.882143 master-0 kubenswrapper[7271]: I0313 10:40:55.881788 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:56.883782 master-0 kubenswrapper[7271]: I0313 10:40:56.883684 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:56.883782 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:56.883782 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:56.883782 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:56.884881 master-0 kubenswrapper[7271]: I0313 10:40:56.883799 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:57.883421 master-0 kubenswrapper[7271]: I0313 10:40:57.883362 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:57.883421 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:57.883421 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:57.883421 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:57.884320 master-0 kubenswrapper[7271]: I0313 10:40:57.883440 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:58.882778 
master-0 kubenswrapper[7271]: I0313 10:40:58.882711 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:58.882778 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:58.882778 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:58.882778 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:58.883185 master-0 kubenswrapper[7271]: I0313 10:40:58.882797 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:40:59.883283 master-0 kubenswrapper[7271]: I0313 10:40:59.883187 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:40:59.883283 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:40:59.883283 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:40:59.883283 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:40:59.883862 master-0 kubenswrapper[7271]: I0313 10:40:59.883315 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:00.113510 master-0 kubenswrapper[7271]: I0313 10:41:00.113433 7271 generic.go:334] "Generic (PLEG): container finished" podID="c87545aa-11c2-4e6e-8c13-16eeff3be83b" 
containerID="a54ca7738955f7ec185b4cde3784d0158686a36edc078876172035717347c129" exitCode=0 Mar 13 10:41:00.113510 master-0 kubenswrapper[7271]: I0313 10:41:00.113494 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" event={"ID":"c87545aa-11c2-4e6e-8c13-16eeff3be83b","Type":"ContainerDied","Data":"a54ca7738955f7ec185b4cde3784d0158686a36edc078876172035717347c129"} Mar 13 10:41:00.114063 master-0 kubenswrapper[7271]: I0313 10:41:00.114023 7271 scope.go:117] "RemoveContainer" containerID="a54ca7738955f7ec185b4cde3784d0158686a36edc078876172035717347c129" Mar 13 10:41:00.882775 master-0 kubenswrapper[7271]: I0313 10:41:00.882694 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:00.882775 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:00.882775 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:00.882775 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:00.882775 master-0 kubenswrapper[7271]: I0313 10:41:00.882768 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:01.122662 master-0 kubenswrapper[7271]: I0313 10:41:01.122577 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" event={"ID":"c87545aa-11c2-4e6e-8c13-16eeff3be83b","Type":"ContainerStarted","Data":"c1021907f78c49f4369cc2d4436d324e405452a204403884d4d7c11bd378aa13"} Mar 13 10:41:01.882873 master-0 kubenswrapper[7271]: I0313 10:41:01.882801 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:01.882873 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:01.882873 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:01.882873 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:01.882873 master-0 kubenswrapper[7271]: I0313 10:41:01.882869 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:02.882910 master-0 kubenswrapper[7271]: I0313 10:41:02.882835 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:02.882910 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:02.882910 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:02.882910 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:02.883564 master-0 kubenswrapper[7271]: I0313 10:41:02.882941 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:03.882799 master-0 kubenswrapper[7271]: I0313 10:41:03.882711 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:03.882799 master-0 
kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:03.882799 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:03.882799 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:03.883470 master-0 kubenswrapper[7271]: I0313 10:41:03.882826 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:04.883370 master-0 kubenswrapper[7271]: I0313 10:41:04.883281 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:04.883370 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:04.883370 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:04.883370 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:04.884050 master-0 kubenswrapper[7271]: I0313 10:41:04.883373 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:05.882916 master-0 kubenswrapper[7271]: I0313 10:41:05.882853 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:05.882916 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:05.882916 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:05.882916 master-0 kubenswrapper[7271]: healthz check failed Mar 13 
10:41:05.883238 master-0 kubenswrapper[7271]: I0313 10:41:05.882936 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:06.608644 master-0 kubenswrapper[7271]: I0313 10:41:06.608561 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:41:06.613873 master-0 kubenswrapper[7271]: I0313 10:41:06.613830 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:41:06.882696 master-0 kubenswrapper[7271]: I0313 10:41:06.882566 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:06.882696 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:06.882696 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:06.882696 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:06.883014 master-0 kubenswrapper[7271]: I0313 10:41:06.882986 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:07.884087 master-0 kubenswrapper[7271]: I0313 10:41:07.883990 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:07.884087 master-0 
kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:07.884087 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:07.884087 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:07.885172 master-0 kubenswrapper[7271]: I0313 10:41:07.884094 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:08.883952 master-0 kubenswrapper[7271]: I0313 10:41:08.883886 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:08.883952 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:08.883952 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:08.883952 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:08.884713 master-0 kubenswrapper[7271]: I0313 10:41:08.883961 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:09.882813 master-0 kubenswrapper[7271]: I0313 10:41:09.882736 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:09.882813 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:09.882813 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:09.882813 master-0 kubenswrapper[7271]: healthz check failed Mar 13 
10:41:09.883122 master-0 kubenswrapper[7271]: I0313 10:41:09.882829 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:10.882318 master-0 kubenswrapper[7271]: I0313 10:41:10.882254 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:10.882318 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:10.882318 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:10.882318 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:10.882964 master-0 kubenswrapper[7271]: I0313 10:41:10.882343 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:10.980017 master-0 kubenswrapper[7271]: I0313 10:41:10.979935 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-cwlxw_1434c4a2-5c4d-478a-a16a-7d6a52ea3099/authentication-operator/0.log" Mar 13 10:41:11.179742 master-0 kubenswrapper[7271]: I0313 10:41:11.179610 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-cwlxw_1434c4a2-5c4d-478a-a16a-7d6a52ea3099/authentication-operator/1.log" Mar 13 10:41:11.376333 master-0 kubenswrapper[7271]: I0313 10:41:11.376182 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-b4x54_eb778c86-ea51-4eab-82b8-a8e0bec0f050/router/0.log" Mar 13 10:41:11.570926 master-0 kubenswrapper[7271]: I0313 10:41:11.570871 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-778fb45b4-65f7b_4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/fix-audit-permissions/0.log" Mar 13 10:41:11.777792 master-0 kubenswrapper[7271]: I0313 10:41:11.777724 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-778fb45b4-65f7b_4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/oauth-apiserver/0.log" Mar 13 10:41:11.882247 master-0 kubenswrapper[7271]: I0313 10:41:11.882108 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:11.882247 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:11.882247 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:11.882247 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:11.882247 master-0 kubenswrapper[7271]: I0313 10:41:11.882194 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:11.977331 master-0 kubenswrapper[7271]: I0313 10:41:11.977281 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-df8wr_574bf255-14b3-40af-b240-2d3abd5b86b8/etcd-operator/0.log" Mar 13 10:41:12.174478 master-0 kubenswrapper[7271]: I0313 10:41:12.174357 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-df8wr_574bf255-14b3-40af-b240-2d3abd5b86b8/etcd-operator/1.log" Mar 13 10:41:12.372336 master-0 kubenswrapper[7271]: I0313 10:41:12.372288 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/setup/0.log" Mar 13 10:41:12.572241 master-0 kubenswrapper[7271]: I0313 10:41:12.572191 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-ensure-env-vars/0.log" Mar 13 10:41:12.771544 master-0 kubenswrapper[7271]: I0313 10:41:12.771499 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-resources-copy/0.log" Mar 13 10:41:12.883162 master-0 kubenswrapper[7271]: I0313 10:41:12.883035 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:12.883162 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:12.883162 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:12.883162 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:12.883162 master-0 kubenswrapper[7271]: I0313 10:41:12.883100 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:12.971811 master-0 kubenswrapper[7271]: I0313 10:41:12.971753 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 10:41:13.175705 master-0 kubenswrapper[7271]: I0313 10:41:13.175599 7271 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 10:41:13.374801 master-0 kubenswrapper[7271]: I0313 10:41:13.374746 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 10:41:13.571706 master-0 kubenswrapper[7271]: I0313 10:41:13.571646 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-readyz/0.log" Mar 13 10:41:13.772270 master-0 kubenswrapper[7271]: I0313 10:41:13.772230 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 10:41:13.883122 master-0 kubenswrapper[7271]: I0313 10:41:13.883006 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:13.883122 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:13.883122 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:13.883122 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:13.883122 master-0 kubenswrapper[7271]: I0313 10:41:13.883082 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:13.975087 master-0 kubenswrapper[7271]: I0313 10:41:13.975026 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_00e8e251-40d9-458a-92a7-9b2e91dc7359/installer/0.log" Mar 13 10:41:14.178026 master-0 kubenswrapper[7271]: I0313 10:41:14.177893 7271 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-vqdk8_86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/kube-apiserver-operator/0.log" Mar 13 10:41:14.373322 master-0 kubenswrapper[7271]: I0313 10:41:14.373267 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-vqdk8_86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/kube-apiserver-operator/1.log" Mar 13 10:41:14.572835 master-0 kubenswrapper[7271]: I0313 10:41:14.572791 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/setup/0.log" Mar 13 10:41:14.774978 master-0 kubenswrapper[7271]: I0313 10:41:14.774936 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver/0.log" Mar 13 10:41:14.882650 master-0 kubenswrapper[7271]: I0313 10:41:14.882511 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:14.882650 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:14.882650 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:14.882650 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:14.882650 master-0 kubenswrapper[7271]: I0313 10:41:14.882579 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:14.972037 master-0 kubenswrapper[7271]: I0313 10:41:14.971982 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver-insecure-readyz/0.log" Mar 13 10:41:15.175010 master-0 kubenswrapper[7271]: I0313 10:41:15.174883 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_7baf3efc-04dc-4c17-9c2a-397ac022d281/installer/0.log" Mar 13 10:41:15.375136 master-0 kubenswrapper[7271]: I0313 10:41:15.375077 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_feb7b798-15b5-4004-87d0-96ce9381cdbe/installer/0.log" Mar 13 10:41:15.579174 master-0 kubenswrapper[7271]: I0313 10:41:15.579120 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-px9bl_ec3168fc-6c8f-4603-94e0-17b1ae22a802/kube-controller-manager-operator/0.log" Mar 13 10:41:15.773475 master-0 kubenswrapper[7271]: I0313 10:41:15.773400 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-px9bl_ec3168fc-6c8f-4603-94e0-17b1ae22a802/kube-controller-manager-operator/1.log" Mar 13 10:41:15.884439 master-0 kubenswrapper[7271]: I0313 10:41:15.884272 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:15.884439 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:15.884439 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:15.884439 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:15.884439 master-0 kubenswrapper[7271]: I0313 10:41:15.884377 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:15.978427 master-0 kubenswrapper[7271]: I0313 10:41:15.978364 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/2.log" Mar 13 10:41:16.378224 master-0 kubenswrapper[7271]: I0313 10:41:16.378170 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/3.log" Mar 13 10:41:16.577072 master-0 kubenswrapper[7271]: I0313 10:41:16.576997 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/cluster-policy-controller/0.log" Mar 13 10:41:16.774622 master-0 kubenswrapper[7271]: I0313 10:41:16.774472 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/0.log" Mar 13 10:41:16.882553 master-0 kubenswrapper[7271]: I0313 10:41:16.882487 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:16.882553 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:16.882553 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:16.882553 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:16.882932 master-0 kubenswrapper[7271]: I0313 10:41:16.882562 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:16.976676 master-0 kubenswrapper[7271]: I0313 10:41:16.976630 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/1.log" Mar 13 10:41:17.174132 master-0 kubenswrapper[7271]: I0313 10:41:17.174077 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_9e06733a-9c47-4bcf-a5e2-946db8e2714b/installer/0.log" Mar 13 10:41:17.374184 master-0 kubenswrapper[7271]: I0313 10:41:17.374081 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-dpslh_8f9db15a-8854-485b-9863-9cbe5dddd977/kube-scheduler-operator-container/0.log" Mar 13 10:41:17.572772 master-0 kubenswrapper[7271]: I0313 10:41:17.572692 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-dpslh_8f9db15a-8854-485b-9863-9cbe5dddd977/kube-scheduler-operator-container/1.log" Mar 13 10:41:17.884862 master-0 kubenswrapper[7271]: I0313 10:41:17.884730 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:17.884862 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:17.884862 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:17.884862 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:17.884862 master-0 kubenswrapper[7271]: I0313 10:41:17.884822 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:17.976662 master-0 kubenswrapper[7271]: I0313 10:41:17.976529 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/manager/0.log" Mar 13 10:41:18.176215 master-0 kubenswrapper[7271]: I0313 10:41:18.175995 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/kube-rbac-proxy/0.log" Mar 13 10:41:18.372013 master-0 kubenswrapper[7271]: I0313 10:41:18.371965 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/manager/1.log" Mar 13 10:41:18.577480 master-0 kubenswrapper[7271]: I0313 10:41:18.577439 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/manager/0.log" Mar 13 10:41:18.883384 master-0 kubenswrapper[7271]: I0313 10:41:18.883229 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:18.883384 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:18.883384 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:18.883384 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:18.883384 master-0 kubenswrapper[7271]: I0313 10:41:18.883317 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:18.976798 master-0 kubenswrapper[7271]: I0313 10:41:18.976740 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/manager/1.log" Mar 13 10:41:19.176880 master-0 kubenswrapper[7271]: I0313 10:41:19.176686 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/kube-rbac-proxy/0.log" Mar 13 10:41:19.777200 master-0 kubenswrapper[7271]: I0313 10:41:19.777143 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-kh9h2_b8d40b37-0f3d-4531-9fa8-eda965d2337d/cluster-olm-operator/0.log" Mar 13 10:41:19.882510 master-0 kubenswrapper[7271]: I0313 10:41:19.882466 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:19.882510 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:19.882510 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:19.882510 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:19.883031 master-0 kubenswrapper[7271]: I0313 10:41:19.882996 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:19.971237 master-0 kubenswrapper[7271]: I0313 10:41:19.971185 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-kh9h2_b8d40b37-0f3d-4531-9fa8-eda965d2337d/copy-catalogd-manifests/0.log" Mar 13 10:41:20.171287 master-0 kubenswrapper[7271]: I0313 10:41:20.171225 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-kh9h2_b8d40b37-0f3d-4531-9fa8-eda965d2337d/copy-operator-controller-manifests/0.log" Mar 13 10:41:20.372475 master-0 kubenswrapper[7271]: I0313 10:41:20.372437 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-kh9h2_b8d40b37-0f3d-4531-9fa8-eda965d2337d/cluster-olm-operator/1.log" Mar 13 10:41:20.576368 master-0 kubenswrapper[7271]: I0313 10:41:20.576318 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-sdg4w_5ed5e77b-948b-4d94-ac9f-440ee3c07e18/openshift-apiserver-operator/0.log" Mar 13 10:41:20.776101 master-0 kubenswrapper[7271]: I0313 10:41:20.776006 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-sdg4w_5ed5e77b-948b-4d94-ac9f-440ee3c07e18/openshift-apiserver-operator/1.log" Mar 13 10:41:20.883298 master-0 kubenswrapper[7271]: I0313 10:41:20.883140 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:20.883298 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:20.883298 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:20.883298 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:20.883298 master-0 kubenswrapper[7271]: I0313 10:41:20.883219 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:20.973509 master-0 kubenswrapper[7271]: I0313 10:41:20.973453 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-65bc99cdf7-7rjbr_1d72d950-cfb4-4ed5-9ad6-f7266b937493/fix-audit-permissions/0.log" Mar 13 10:41:21.179733 master-0 kubenswrapper[7271]: I0313 10:41:21.179465 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-65bc99cdf7-7rjbr_1d72d950-cfb4-4ed5-9ad6-f7266b937493/openshift-apiserver/0.log" Mar 13 10:41:21.373974 master-0 kubenswrapper[7271]: I0313 10:41:21.373875 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-65bc99cdf7-7rjbr_1d72d950-cfb4-4ed5-9ad6-f7266b937493/openshift-apiserver-check-endpoints/0.log" Mar 13 10:41:21.574119 master-0 kubenswrapper[7271]: I0313 10:41:21.574025 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-df8wr_574bf255-14b3-40af-b240-2d3abd5b86b8/etcd-operator/0.log" Mar 13 10:41:21.773161 master-0 kubenswrapper[7271]: I0313 10:41:21.773094 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-df8wr_574bf255-14b3-40af-b240-2d3abd5b86b8/etcd-operator/1.log" Mar 13 10:41:21.882353 master-0 kubenswrapper[7271]: I0313 10:41:21.882174 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:21.882353 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:21.882353 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:21.882353 master-0 
kubenswrapper[7271]: healthz check failed Mar 13 10:41:21.882353 master-0 kubenswrapper[7271]: I0313 10:41:21.882235 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:21.972318 master-0 kubenswrapper[7271]: I0313 10:41:21.972265 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nsg74_282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/openshift-controller-manager-operator/1.log" Mar 13 10:41:22.173512 master-0 kubenswrapper[7271]: I0313 10:41:22.173365 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nsg74_282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/openshift-controller-manager-operator/2.log" Mar 13 10:41:22.376335 master-0 kubenswrapper[7271]: I0313 10:41:22.376284 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6954c8766d-g8z48_6317b62a-46e2-4a45-9c29-cb04c40d4425/controller-manager/0.log" Mar 13 10:41:22.576230 master-0 kubenswrapper[7271]: I0313 10:41:22.576151 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6954c8766d-g8z48_6317b62a-46e2-4a45-9c29-cb04c40d4425/controller-manager/1.log" Mar 13 10:41:22.778157 master-0 kubenswrapper[7271]: I0313 10:41:22.778093 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-657b8bf46d-r5dxm_d239be49-f88d-46e3-a101-3a46119597ce/route-controller-manager/0.log" Mar 13 10:41:22.883807 master-0 kubenswrapper[7271]: I0313 10:41:22.883638 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:22.883807 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:22.883807 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:22.883807 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:22.884053 master-0 kubenswrapper[7271]: I0313 10:41:22.883784 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:22.978015 master-0 kubenswrapper[7271]: I0313 10:41:22.977969 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-2j5jl_c455a959-d764-4b4f-a1e0-95c27495dd9d/catalog-operator/0.log" Mar 13 10:41:23.176890 master-0 kubenswrapper[7271]: I0313 10:41:23.176737 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-rsl2h_2afe3890-e844-4dd3-ba49-3ac9178549bf/olm-operator/0.log" Mar 13 10:41:23.374207 master-0 kubenswrapper[7271]: I0313 10:41:23.374109 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-d5b45_8a305f45-8689-45a8-8c8b-5954f2c863df/kube-rbac-proxy/0.log" Mar 13 10:41:23.574493 master-0 kubenswrapper[7271]: I0313 10:41:23.574456 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-d5b45_8a305f45-8689-45a8-8c8b-5954f2c863df/package-server-manager/0.log" Mar 13 10:41:23.783275 master-0 kubenswrapper[7271]: I0313 10:41:23.783215 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-7b564dfc5b-qc9cq_1edde4bf-4554-4ab2-b588-513ad84a9bae/packageserver/0.log" Mar 13 10:41:23.882389 master-0 kubenswrapper[7271]: I0313 10:41:23.882228 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:23.882389 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:23.882389 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:23.882389 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:23.882389 master-0 kubenswrapper[7271]: I0313 10:41:23.882321 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:24.883733 master-0 kubenswrapper[7271]: I0313 10:41:24.883632 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:24.883733 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:24.883733 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:24.883733 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:24.884432 master-0 kubenswrapper[7271]: I0313 10:41:24.883770 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:25.882104 master-0 kubenswrapper[7271]: I0313 
10:41:25.882039 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:25.882104 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:25.882104 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:25.882104 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:25.882526 master-0 kubenswrapper[7271]: I0313 10:41:25.882126 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:26.882606 master-0 kubenswrapper[7271]: I0313 10:41:26.882488 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:26.882606 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:26.882606 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:26.882606 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:26.883198 master-0 kubenswrapper[7271]: I0313 10:41:26.882628 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:27.890890 master-0 kubenswrapper[7271]: I0313 10:41:27.890829 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:27.890890 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:27.890890 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:27.890890 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:27.891709 master-0 kubenswrapper[7271]: I0313 10:41:27.890893 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:28.882710 master-0 kubenswrapper[7271]: I0313 10:41:28.882642 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:28.882710 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:28.882710 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:28.882710 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:28.883005 master-0 kubenswrapper[7271]: I0313 10:41:28.882731 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:29.883224 master-0 kubenswrapper[7271]: I0313 10:41:29.883122 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:29.883224 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:29.883224 master-0 kubenswrapper[7271]: [+]process-running ok 
Mar 13 10:41:29.883224 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:29.884442 master-0 kubenswrapper[7271]: I0313 10:41:29.883232 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:30.884212 master-0 kubenswrapper[7271]: I0313 10:41:30.883960 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:30.884212 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:30.884212 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:30.884212 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:30.884212 master-0 kubenswrapper[7271]: I0313 10:41:30.884065 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:31.882965 master-0 kubenswrapper[7271]: I0313 10:41:31.882882 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:31.882965 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:31.882965 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:31.882965 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:31.883412 master-0 kubenswrapper[7271]: I0313 10:41:31.882977 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:32.883874 master-0 kubenswrapper[7271]: I0313 10:41:32.883779 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:32.883874 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:32.883874 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:32.883874 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:32.883874 master-0 kubenswrapper[7271]: I0313 10:41:32.883862 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:33.883896 master-0 kubenswrapper[7271]: I0313 10:41:33.883797 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:33.883896 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:33.883896 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:33.883896 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:33.884752 master-0 kubenswrapper[7271]: I0313 10:41:33.883901 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:34.883104 
master-0 kubenswrapper[7271]: I0313 10:41:34.883004 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:34.883104 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:34.883104 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:34.883104 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:34.883740 master-0 kubenswrapper[7271]: I0313 10:41:34.883110 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:35.883794 master-0 kubenswrapper[7271]: I0313 10:41:35.883689 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:35.883794 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:35.883794 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:35.883794 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:35.884740 master-0 kubenswrapper[7271]: I0313 10:41:35.883825 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:36.883262 master-0 kubenswrapper[7271]: I0313 10:41:36.883183 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:36.883262 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:36.883262 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:36.883262 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:36.884526 master-0 kubenswrapper[7271]: I0313 10:41:36.883315 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:37.882411 master-0 kubenswrapper[7271]: I0313 10:41:37.882344 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:37.882411 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:37.882411 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:37.882411 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:37.882860 master-0 kubenswrapper[7271]: I0313 10:41:37.882421 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:38.883948 master-0 kubenswrapper[7271]: I0313 10:41:38.883852 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:38.883948 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:38.883948 master-0 
kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:38.883948 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:38.885114 master-0 kubenswrapper[7271]: I0313 10:41:38.883994 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:39.884194 master-0 kubenswrapper[7271]: I0313 10:41:39.884131 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:39.884194 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:39.884194 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:39.884194 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:39.884942 master-0 kubenswrapper[7271]: I0313 10:41:39.884199 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:40.884238 master-0 kubenswrapper[7271]: I0313 10:41:40.884131 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:40.884238 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:40.884238 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:40.884238 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:40.884860 master-0 kubenswrapper[7271]: I0313 10:41:40.884296 7271 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:41.883568 master-0 kubenswrapper[7271]: I0313 10:41:41.883501 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:41.883568 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:41.883568 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:41.883568 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:41.884021 master-0 kubenswrapper[7271]: I0313 10:41:41.883579 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:42.883665 master-0 kubenswrapper[7271]: I0313 10:41:42.883535 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:42.883665 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:42.883665 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:42.883665 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:42.883665 master-0 kubenswrapper[7271]: I0313 10:41:42.883639 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 13 10:41:43.884184 master-0 kubenswrapper[7271]: I0313 10:41:43.884099 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:43.884184 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:43.884184 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:43.884184 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:43.884184 master-0 kubenswrapper[7271]: I0313 10:41:43.884168 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:44.882056 master-0 kubenswrapper[7271]: I0313 10:41:44.882001 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:44.882056 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:44.882056 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:44.882056 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:44.882376 master-0 kubenswrapper[7271]: I0313 10:41:44.882073 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:46.029023 master-0 kubenswrapper[7271]: I0313 10:41:46.028844 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:46.029023 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:46.029023 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:46.029023 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:46.029023 master-0 kubenswrapper[7271]: I0313 10:41:46.028935 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:46.882178 master-0 kubenswrapper[7271]: I0313 10:41:46.882098 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:46.882178 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:46.882178 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:46.882178 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:46.882477 master-0 kubenswrapper[7271]: I0313 10:41:46.882178 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:47.883144 master-0 kubenswrapper[7271]: I0313 10:41:47.883050 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:47.883144 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 
10:41:47.883144 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:47.883144 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:47.884004 master-0 kubenswrapper[7271]: I0313 10:41:47.883144 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:48.882698 master-0 kubenswrapper[7271]: I0313 10:41:48.882644 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:48.882698 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:48.882698 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:48.882698 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:48.883039 master-0 kubenswrapper[7271]: I0313 10:41:48.882714 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:49.881873 master-0 kubenswrapper[7271]: I0313 10:41:49.881820 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:49.881873 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:49.881873 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:49.881873 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:49.882630 master-0 kubenswrapper[7271]: I0313 10:41:49.881877 
7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:50.882772 master-0 kubenswrapper[7271]: I0313 10:41:50.882708 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:50.882772 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:50.882772 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:50.882772 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:50.883566 master-0 kubenswrapper[7271]: I0313 10:41:50.882779 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:51.882971 master-0 kubenswrapper[7271]: I0313 10:41:51.882910 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:51.882971 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:51.882971 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:51.882971 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:51.883523 master-0 kubenswrapper[7271]: I0313 10:41:51.882977 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 13 10:41:52.882946 master-0 kubenswrapper[7271]: I0313 10:41:52.882874 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:52.882946 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:52.882946 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:52.882946 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:52.883517 master-0 kubenswrapper[7271]: I0313 10:41:52.882956 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:53.883612 master-0 kubenswrapper[7271]: I0313 10:41:53.883515 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:53.883612 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:53.883612 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:53.883612 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:53.884284 master-0 kubenswrapper[7271]: I0313 10:41:53.883651 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:54.883364 master-0 kubenswrapper[7271]: I0313 10:41:54.883302 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:54.883364 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:54.883364 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:54.883364 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:54.884119 master-0 kubenswrapper[7271]: I0313 10:41:54.883382 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:55.883171 master-0 kubenswrapper[7271]: I0313 10:41:55.883090 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:55.883171 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:55.883171 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:55.883171 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:55.883617 master-0 kubenswrapper[7271]: I0313 10:41:55.883187 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:56.883730 master-0 kubenswrapper[7271]: I0313 10:41:56.883643 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:56.883730 master-0 kubenswrapper[7271]: 
[-]has-synced failed: reason withheld Mar 13 10:41:56.883730 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:56.883730 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:56.884647 master-0 kubenswrapper[7271]: I0313 10:41:56.883737 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:57.882974 master-0 kubenswrapper[7271]: I0313 10:41:57.882904 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:57.882974 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:57.882974 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:57.882974 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:57.883347 master-0 kubenswrapper[7271]: I0313 10:41:57.882984 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:58.883371 master-0 kubenswrapper[7271]: I0313 10:41:58.883288 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:58.883371 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:58.883371 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:58.883371 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:58.884115 master-0 
kubenswrapper[7271]: I0313 10:41:58.883417 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:41:59.882495 master-0 kubenswrapper[7271]: I0313 10:41:59.882435 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:41:59.882495 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:41:59.882495 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:41:59.882495 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:41:59.882818 master-0 kubenswrapper[7271]: I0313 10:41:59.882506 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:42:00.882197 master-0 kubenswrapper[7271]: I0313 10:42:00.882145 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:42:00.882197 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:42:00.882197 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:42:00.882197 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:42:00.882863 master-0 kubenswrapper[7271]: I0313 10:42:00.882235 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:42:01.882993 master-0 kubenswrapper[7271]: I0313 10:42:01.882906 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:42:01.882993 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:42:01.882993 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:42:01.882993 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:42:01.883742 master-0 kubenswrapper[7271]: I0313 10:42:01.883030 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:42:02.882747 master-0 kubenswrapper[7271]: I0313 10:42:02.882707 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:42:02.882747 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:42:02.882747 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:42:02.882747 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:42:02.883080 master-0 kubenswrapper[7271]: I0313 10:42:02.883053 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:42:03.882680 master-0 kubenswrapper[7271]: I0313 10:42:03.882619 7271 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:42:03.882680 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:42:03.882680 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:42:03.882680 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:42:03.882989 master-0 kubenswrapper[7271]: I0313 10:42:03.882709 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:42:04.882728 master-0 kubenswrapper[7271]: I0313 10:42:04.882659 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:42:04.882728 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:42:04.882728 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:42:04.882728 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:42:04.882728 master-0 kubenswrapper[7271]: I0313 10:42:04.882735 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:42:05.882706 master-0 kubenswrapper[7271]: I0313 10:42:05.882604 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
10:42:05.882706 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:42:05.882706 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:42:05.882706 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:42:05.883314 master-0 kubenswrapper[7271]: I0313 10:42:05.882703 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:42:05.883314 master-0 kubenswrapper[7271]: I0313 10:42:05.882773 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:42:05.883571 master-0 kubenswrapper[7271]: I0313 10:42:05.883532 7271 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"c38e1852651e9aa29b7c4aa782bd48bf04b7ff3ecd204555f9421edc8fb3fef6"} pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" containerMessage="Container router failed startup probe, will be restarted" Mar 13 10:42:05.883648 master-0 kubenswrapper[7271]: I0313 10:42:05.883578 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" containerID="cri-o://c38e1852651e9aa29b7c4aa782bd48bf04b7ff3ecd204555f9421edc8fb3fef6" gracePeriod=3600 Mar 13 10:42:52.432241 master-0 kubenswrapper[7271]: I0313 10:42:52.432198 7271 generic.go:334] "Generic (PLEG): container finished" podID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerID="c38e1852651e9aa29b7c4aa782bd48bf04b7ff3ecd204555f9421edc8fb3fef6" exitCode=0 Mar 13 10:42:52.432852 master-0 kubenswrapper[7271]: I0313 10:42:52.432265 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerDied","Data":"c38e1852651e9aa29b7c4aa782bd48bf04b7ff3ecd204555f9421edc8fb3fef6"} Mar 13 10:42:52.432941 master-0 kubenswrapper[7271]: I0313 10:42:52.432926 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerStarted","Data":"9aabceaa9098fa374fa3be7884e41fb57131871ca89880498f237e8d19971731"} Mar 13 10:42:52.880764 master-0 kubenswrapper[7271]: I0313 10:42:52.880694 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:42:52.880764 master-0 kubenswrapper[7271]: I0313 10:42:52.880758 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:42:52.883434 master-0 kubenswrapper[7271]: I0313 10:42:52.883371 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:42:52.883434 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:42:52.883434 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:42:52.883434 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:42:52.883655 master-0 kubenswrapper[7271]: I0313 10:42:52.883443 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:42:53.883480 master-0 kubenswrapper[7271]: I0313 10:42:53.883346 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:42:53.883480 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:42:53.883480 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:42:53.883480 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:42:53.884491 master-0 kubenswrapper[7271]: I0313 10:42:53.883504 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:42:54.276761 master-0 kubenswrapper[7271]: I0313 10:42:54.273556 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dxhl9"] Mar 13 10:42:54.276761 master-0 kubenswrapper[7271]: I0313 10:42:54.274660 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:42:54.277682 master-0 kubenswrapper[7271]: I0313 10:42:54.277184 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 10:42:54.278229 master-0 kubenswrapper[7271]: I0313 10:42:54.278210 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 10:42:54.278506 master-0 kubenswrapper[7271]: I0313 10:42:54.278487 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-27hpj" Mar 13 10:42:54.278849 master-0 kubenswrapper[7271]: I0313 10:42:54.278702 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 10:42:54.296236 master-0 kubenswrapper[7271]: I0313 10:42:54.296169 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:42:54.296470 master-0 kubenswrapper[7271]: I0313 10:42:54.296286 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btws6\" (UniqueName: \"kubernetes.io/projected/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-kube-api-access-btws6\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:42:54.298855 master-0 kubenswrapper[7271]: I0313 10:42:54.298830 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dxhl9"] Mar 13 10:42:54.399132 master-0 kubenswrapper[7271]: I0313 10:42:54.398635 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:42:54.399132 master-0 kubenswrapper[7271]: I0313 10:42:54.398734 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btws6\" (UniqueName: \"kubernetes.io/projected/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-kube-api-access-btws6\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:42:54.410658 master-0 kubenswrapper[7271]: I0313 10:42:54.407358 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " 
pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:42:54.417471 master-0 kubenswrapper[7271]: I0313 10:42:54.417147 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btws6\" (UniqueName: \"kubernetes.io/projected/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-kube-api-access-btws6\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:42:54.450314 master-0 kubenswrapper[7271]: I0313 10:42:54.450250 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/1.log" Mar 13 10:42:54.451341 master-0 kubenswrapper[7271]: I0313 10:42:54.451299 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/0.log" Mar 13 10:42:54.451427 master-0 kubenswrapper[7271]: I0313 10:42:54.451365 7271 generic.go:334] "Generic (PLEG): container finished" podID="7667717b-fb74-456b-8615-16475cb69e98" containerID="246797499d890bbe0f0da9bedf22922185a5c85e0c93f20f83953bdd9898d644" exitCode=1 Mar 13 10:42:54.451427 master-0 kubenswrapper[7271]: I0313 10:42:54.451404 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerDied","Data":"246797499d890bbe0f0da9bedf22922185a5c85e0c93f20f83953bdd9898d644"} Mar 13 10:42:54.451519 master-0 kubenswrapper[7271]: I0313 10:42:54.451442 7271 scope.go:117] "RemoveContainer" containerID="8931f468146aea32eb1151d08ef9573b7c8bddcc57495ce9f6bd5b790621abc0" Mar 13 10:42:54.452475 master-0 kubenswrapper[7271]: I0313 10:42:54.452452 7271 scope.go:117] "RemoveContainer" containerID="246797499d890bbe0f0da9bedf22922185a5c85e0c93f20f83953bdd9898d644" Mar 13 10:42:54.452739 
master-0 kubenswrapper[7271]: E0313 10:42:54.452688 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98"
Mar 13 10:42:54.622742 master-0 kubenswrapper[7271]: I0313 10:42:54.622693 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dxhl9"
Mar 13 10:42:54.883429 master-0 kubenswrapper[7271]: I0313 10:42:54.883296 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:42:54.883429 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:42:54.883429 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:42:54.883429 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:42:54.883429 master-0 kubenswrapper[7271]: I0313 10:42:54.883368 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:42:55.070628 master-0 kubenswrapper[7271]: I0313 10:42:55.070558 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dxhl9"]
Mar 13 10:42:55.079844 master-0 kubenswrapper[7271]: W0313 10:42:55.079778 7271 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05a72a4c_5ce8_49d1_8e4f_334f63d4e987.slice/crio-cc41129be016cbf901b0a2cc5f025302b5359f5df446ef56a8371863c53e45e5 WatchSource:0}: Error finding container cc41129be016cbf901b0a2cc5f025302b5359f5df446ef56a8371863c53e45e5: Status 404 returned error can't find the container with id cc41129be016cbf901b0a2cc5f025302b5359f5df446ef56a8371863c53e45e5
Mar 13 10:42:55.458405 master-0 kubenswrapper[7271]: I0313 10:42:55.458252 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dxhl9" event={"ID":"05a72a4c-5ce8-49d1-8e4f-334f63d4e987","Type":"ContainerStarted","Data":"dde045a43c6e976b1c45a6a1120a6c0f1b675b4289624ebe97574cb098a2cb23"}
Mar 13 10:42:55.458405 master-0 kubenswrapper[7271]: I0313 10:42:55.458310 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dxhl9" event={"ID":"05a72a4c-5ce8-49d1-8e4f-334f63d4e987","Type":"ContainerStarted","Data":"cc41129be016cbf901b0a2cc5f025302b5359f5df446ef56a8371863c53e45e5"}
Mar 13 10:42:55.459892 master-0 kubenswrapper[7271]: I0313 10:42:55.459806 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/1.log"
Mar 13 10:42:55.474135 master-0 kubenswrapper[7271]: I0313 10:42:55.474046 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dxhl9" podStartSLOduration=1.474022851 podStartE2EDuration="1.474022851s" podCreationTimestamp="2026-03-13 10:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:42:55.472078699 +0000 UTC m=+429.998901089" watchObservedRunningTime="2026-03-13 10:42:55.474022851 +0000 UTC m=+430.000845241"
Mar 13 10:42:55.888862 master-0 kubenswrapper[7271]: I0313
10:42:55.888785 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:42:55.888862 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:42:55.888862 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:42:55.888862 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:42:55.889717 master-0 kubenswrapper[7271]: I0313 10:42:55.888888 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:42:56.884414 master-0 kubenswrapper[7271]: I0313 10:42:56.884322 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:42:56.884414 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:42:56.884414 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:42:56.884414 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:42:56.884903 master-0 kubenswrapper[7271]: I0313 10:42:56.884437 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:42:57.883031 master-0 kubenswrapper[7271]: I0313 10:42:57.882969 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500"
start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:42:57.883031 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:42:57.883031 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:42:57.883031 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:42:57.883923 master-0 kubenswrapper[7271]: I0313 10:42:57.883047 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:42:58.882424 master-0 kubenswrapper[7271]: I0313 10:42:58.882321 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:42:58.882424 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:42:58.882424 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:42:58.882424 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:42:58.882899 master-0 kubenswrapper[7271]: I0313 10:42:58.882434 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:42:59.882619 master-0 kubenswrapper[7271]: I0313 10:42:59.882535 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:42:59.882619 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:42:59.882619 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:42:59.882619 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:42:59.882619 master-0 kubenswrapper[7271]: I0313 10:42:59.882615 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:00.883738 master-0 kubenswrapper[7271]: I0313 10:43:00.883647 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:00.883738 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:00.883738 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:00.883738 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:00.884374 master-0 kubenswrapper[7271]: I0313 10:43:00.883756 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:01.883780 master-0 kubenswrapper[7271]: I0313 10:43:01.883701 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:01.883780 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:01.883780 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:01.883780 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:01.884924 master-0 kubenswrapper[7271]: I0313 10:43:01.883799 7271 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:02.882687 master-0 kubenswrapper[7271]: I0313 10:43:02.882615 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:02.882687 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:02.882687 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:02.882687 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:02.883037 master-0 kubenswrapper[7271]: I0313 10:43:02.882697 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:03.882847 master-0 kubenswrapper[7271]: I0313 10:43:03.882785 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:03.882847 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:03.882847 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:03.882847 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:03.884160 master-0 kubenswrapper[7271]: I0313 10:43:03.882856 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:04.883680
master-0 kubenswrapper[7271]: I0313 10:43:04.883621 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:04.883680 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:04.883680 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:04.883680 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:04.884259 master-0 kubenswrapper[7271]: I0313 10:43:04.883688 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:05.882738 master-0 kubenswrapper[7271]: I0313 10:43:05.882654 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:05.882738 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:05.882738 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:05.882738 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:05.883032 master-0 kubenswrapper[7271]: I0313 10:43:05.882752 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:06.646042 master-0 kubenswrapper[7271]: I0313 10:43:06.645980 7271 scope.go:117] "RemoveContainer" containerID="246797499d890bbe0f0da9bedf22922185a5c85e0c93f20f83953bdd9898d644"
Mar 13 10:43:06.883308 master-0
kubenswrapper[7271]: I0313 10:43:06.883241 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:06.883308 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:06.883308 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:06.883308 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:06.883654 master-0 kubenswrapper[7271]: I0313 10:43:06.883341 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:07.563995 master-0 kubenswrapper[7271]: I0313 10:43:07.563954 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/1.log"
Mar 13 10:43:07.564332 master-0 kubenswrapper[7271]: I0313 10:43:07.564302 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerStarted","Data":"aedbabca0ae1386209b376e594af5a1aca17689f565bb27119f58a8f09e1fc7c"}
Mar 13 10:43:07.882987 master-0 kubenswrapper[7271]: I0313 10:43:07.882867 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:07.882987 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:07.882987 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:07.882987 master-0
kubenswrapper[7271]: healthz check failed
Mar 13 10:43:07.884154 master-0 kubenswrapper[7271]: I0313 10:43:07.883801 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:08.883705 master-0 kubenswrapper[7271]: I0313 10:43:08.883549 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:08.883705 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:08.883705 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:08.883705 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:08.883705 master-0 kubenswrapper[7271]: I0313 10:43:08.883687 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:09.882787 master-0 kubenswrapper[7271]: I0313 10:43:09.882677 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:09.882787 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:09.882787 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:09.882787 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:09.883685 master-0 kubenswrapper[7271]: I0313 10:43:09.882792 7271 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:10.882507 master-0 kubenswrapper[7271]: I0313 10:43:10.882439 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:10.882507 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:10.882507 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:10.882507 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:10.883090 master-0 kubenswrapper[7271]: I0313 10:43:10.882525 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:11.883812 master-0 kubenswrapper[7271]: I0313 10:43:11.883708 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:11.883812 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:11.883812 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:11.883812 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:11.885101 master-0 kubenswrapper[7271]: I0313 10:43:11.883848 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:12.883673
master-0 kubenswrapper[7271]: I0313 10:43:12.883566 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:12.883673 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:12.883673 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:12.883673 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:12.884331 master-0 kubenswrapper[7271]: I0313 10:43:12.883708 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:13.883055 master-0 kubenswrapper[7271]: I0313 10:43:13.882985 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:13.883055 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:13.883055 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:13.883055 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:13.883342 master-0 kubenswrapper[7271]: I0313 10:43:13.883060 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:14.883667 master-0 kubenswrapper[7271]: I0313 10:43:14.883498 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:14.883667 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:14.883667 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:14.883667 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:14.883667 master-0 kubenswrapper[7271]: I0313 10:43:14.883575 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:15.883691 master-0 kubenswrapper[7271]: I0313 10:43:15.883574 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:15.883691 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:15.883691 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:15.883691 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:15.884565 master-0 kubenswrapper[7271]: I0313 10:43:15.883739 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:16.883087 master-0 kubenswrapper[7271]: I0313 10:43:16.882986 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:16.883087 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:16.883087 master-0
kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:16.883087 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:16.883087 master-0 kubenswrapper[7271]: I0313 10:43:16.883068 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:17.882945 master-0 kubenswrapper[7271]: I0313 10:43:17.882834 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:17.882945 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:17.882945 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:17.882945 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:17.882945 master-0 kubenswrapper[7271]: I0313 10:43:17.882893 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:18.882870 master-0 kubenswrapper[7271]: I0313 10:43:18.882778 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:18.882870 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:18.882870 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:18.882870 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:18.883938 master-0 kubenswrapper[7271]: I0313 10:43:18.882880 7271 prober.go:107] "Probe
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:19.882968 master-0 kubenswrapper[7271]: I0313 10:43:19.882872 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:19.882968 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:19.882968 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:19.882968 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:19.882968 master-0 kubenswrapper[7271]: I0313 10:43:19.882938 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:20.883092 master-0 kubenswrapper[7271]: I0313 10:43:20.882992 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:20.883092 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:20.883092 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:20.883092 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:20.883092 master-0 kubenswrapper[7271]: I0313 10:43:20.883077 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode:
500"
Mar 13 10:43:21.883602 master-0 kubenswrapper[7271]: I0313 10:43:21.883510 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:21.883602 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:21.883602 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:21.883602 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:21.884337 master-0 kubenswrapper[7271]: I0313 10:43:21.883615 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:22.882704 master-0 kubenswrapper[7271]: I0313 10:43:22.882642 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:22.882704 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:22.882704 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:22.882704 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:22.883020 master-0 kubenswrapper[7271]: I0313 10:43:22.882705 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:23.884138 master-0 kubenswrapper[7271]: I0313 10:43:23.884084 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:23.884138 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:23.884138 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:23.884138 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:23.884757 master-0 kubenswrapper[7271]: I0313 10:43:23.884166 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:24.883789 master-0 kubenswrapper[7271]: I0313 10:43:24.883709 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:24.883789 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:24.883789 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:24.883789 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:24.884702 master-0 kubenswrapper[7271]: I0313 10:43:24.883805 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:25.882745 master-0 kubenswrapper[7271]: I0313 10:43:25.882680 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:25.882745 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13
10:43:25.882745 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:25.882745 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:25.883027 master-0 kubenswrapper[7271]: I0313 10:43:25.882761 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:26.882266 master-0 kubenswrapper[7271]: I0313 10:43:26.882212 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:26.882266 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:26.882266 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:26.882266 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:26.882964 master-0 kubenswrapper[7271]: I0313 10:43:26.882294 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:27.882648 master-0 kubenswrapper[7271]: I0313 10:43:27.882596 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:27.882648 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:27.882648 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:27.882648 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:27.883322 master-0 kubenswrapper[7271]: I0313 10:43:27.882654
7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:28.882233 master-0 kubenswrapper[7271]: I0313 10:43:28.882189 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:28.882233 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:28.882233 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:28.882233 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:28.882515 master-0 kubenswrapper[7271]: I0313 10:43:28.882248 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:29.882863 master-0 kubenswrapper[7271]: I0313 10:43:29.882769 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:29.882863 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:29.882863 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:29.882863 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:29.883474 master-0 kubenswrapper[7271]: I0313 10:43:29.882886 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 13 10:43:30.883366 master-0 kubenswrapper[7271]: I0313 10:43:30.883278 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:30.883366 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:30.883366 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:30.883366 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:30.884070 master-0 kubenswrapper[7271]: I0313 10:43:30.883395 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:31.883013 master-0 kubenswrapper[7271]: I0313 10:43:31.882935 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:31.883013 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:31.883013 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:31.883013 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:31.883013 master-0 kubenswrapper[7271]: I0313 10:43:31.883010 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:32.885300 master-0 kubenswrapper[7271]: I0313 10:43:32.885207 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:32.885300 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:32.885300 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:32.885300 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:32.886065 master-0 kubenswrapper[7271]: I0313 10:43:32.885422 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:33.883241 master-0 kubenswrapper[7271]: I0313 10:43:33.883155 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:33.883241 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:33.883241 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:33.883241 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:33.883754 master-0 kubenswrapper[7271]: I0313 10:43:33.883722 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:34.883335 master-0 kubenswrapper[7271]: I0313 10:43:34.883266 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:34.883335 master-0 kubenswrapper[7271]: 
[-]has-synced failed: reason withheld Mar 13 10:43:34.883335 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:34.883335 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:34.883939 master-0 kubenswrapper[7271]: I0313 10:43:34.883352 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:35.883274 master-0 kubenswrapper[7271]: I0313 10:43:35.883167 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:35.883274 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:35.883274 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:35.883274 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:35.883274 master-0 kubenswrapper[7271]: I0313 10:43:35.883241 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:36.883616 master-0 kubenswrapper[7271]: I0313 10:43:36.883539 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:36.883616 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:36.883616 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:36.883616 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:36.884230 master-0 
kubenswrapper[7271]: I0313 10:43:36.883676 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:37.883316 master-0 kubenswrapper[7271]: I0313 10:43:37.883231 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:37.883316 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:37.883316 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:37.883316 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:37.883316 master-0 kubenswrapper[7271]: I0313 10:43:37.883304 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:38.883167 master-0 kubenswrapper[7271]: I0313 10:43:38.883102 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:38.883167 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:38.883167 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:38.883167 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:38.883895 master-0 kubenswrapper[7271]: I0313 10:43:38.883187 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:39.883085 master-0 kubenswrapper[7271]: I0313 10:43:39.883016 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:39.883085 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:39.883085 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:39.883085 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:39.883897 master-0 kubenswrapper[7271]: I0313 10:43:39.883101 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:40.883213 master-0 kubenswrapper[7271]: I0313 10:43:40.883127 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:40.883213 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:40.883213 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:40.883213 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:40.883792 master-0 kubenswrapper[7271]: I0313 10:43:40.883225 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:41.884044 master-0 kubenswrapper[7271]: I0313 10:43:41.883967 7271 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:41.884044 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:41.884044 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:41.884044 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:41.885084 master-0 kubenswrapper[7271]: I0313 10:43:41.884044 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:42.884166 master-0 kubenswrapper[7271]: I0313 10:43:42.884064 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:42.884166 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:42.884166 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:42.884166 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:42.884924 master-0 kubenswrapper[7271]: I0313 10:43:42.884201 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:43.882699 master-0 kubenswrapper[7271]: I0313 10:43:43.882567 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
10:43:43.882699 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:43.882699 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:43.882699 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:43.883003 master-0 kubenswrapper[7271]: I0313 10:43:43.882768 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:44.882380 master-0 kubenswrapper[7271]: I0313 10:43:44.882340 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:44.882380 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:44.882380 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:44.882380 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:44.882968 master-0 kubenswrapper[7271]: I0313 10:43:44.882395 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:45.882766 master-0 kubenswrapper[7271]: I0313 10:43:45.882687 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:45.882766 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:45.882766 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:45.882766 master-0 kubenswrapper[7271]: healthz 
check failed Mar 13 10:43:45.883478 master-0 kubenswrapper[7271]: I0313 10:43:45.882770 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:46.883557 master-0 kubenswrapper[7271]: I0313 10:43:46.883456 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:46.883557 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:46.883557 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:46.883557 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:46.884347 master-0 kubenswrapper[7271]: I0313 10:43:46.883652 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:47.884110 master-0 kubenswrapper[7271]: I0313 10:43:47.884018 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:47.884110 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:47.884110 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:47.884110 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:47.884895 master-0 kubenswrapper[7271]: I0313 10:43:47.884103 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:48.838115 master-0 kubenswrapper[7271]: I0313 10:43:48.838043 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 10:43:48.839167 master-0 kubenswrapper[7271]: I0313 10:43:48.839127 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:48.842061 master-0 kubenswrapper[7271]: I0313 10:43:48.842021 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 10:43:48.842238 master-0 kubenswrapper[7271]: I0313 10:43:48.842195 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-928wn" Mar 13 10:43:48.848453 master-0 kubenswrapper[7271]: I0313 10:43:48.848393 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 10:43:48.858719 master-0 kubenswrapper[7271]: I0313 10:43:48.858678 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:48.858851 master-0 kubenswrapper[7271]: I0313 10:43:48.858766 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-var-lock\") pod \"installer-2-master-0\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 
10:43:48.858851 master-0 kubenswrapper[7271]: I0313 10:43:48.858793 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2107b8fb-e707-4c48-af51-52dd046bf99b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:48.883385 master-0 kubenswrapper[7271]: I0313 10:43:48.882961 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:48.883385 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:48.883385 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:48.883385 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:48.883385 master-0 kubenswrapper[7271]: I0313 10:43:48.883090 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:48.961736 master-0 kubenswrapper[7271]: I0313 10:43:48.961691 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-var-lock\") pod \"installer-2-master-0\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:48.962423 master-0 kubenswrapper[7271]: I0313 10:43:48.962399 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2107b8fb-e707-4c48-af51-52dd046bf99b-kube-api-access\") pod 
\"installer-2-master-0\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:48.962666 master-0 kubenswrapper[7271]: I0313 10:43:48.962017 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-var-lock\") pod \"installer-2-master-0\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:48.962736 master-0 kubenswrapper[7271]: I0313 10:43:48.962570 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:48.962874 master-0 kubenswrapper[7271]: I0313 10:43:48.962828 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:48.980443 master-0 kubenswrapper[7271]: I0313 10:43:48.980369 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2107b8fb-e707-4c48-af51-52dd046bf99b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:49.165866 master-0 kubenswrapper[7271]: I0313 10:43:49.165735 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:43:49.544016 master-0 kubenswrapper[7271]: I0313 10:43:49.543852 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 10:43:49.827620 master-0 kubenswrapper[7271]: I0313 10:43:49.827305 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2107b8fb-e707-4c48-af51-52dd046bf99b","Type":"ContainerStarted","Data":"0a2949eb2340acf4c82bc49edd83681169d3e69b3e85e3a345ba2e00cc6ab753"} Mar 13 10:43:49.882811 master-0 kubenswrapper[7271]: I0313 10:43:49.882763 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:49.882811 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:49.882811 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:49.882811 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:49.883182 master-0 kubenswrapper[7271]: I0313 10:43:49.882820 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:50.631978 master-0 kubenswrapper[7271]: I0313 10:43:50.631608 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6954c8766d-g8z48"] Mar 13 10:43:50.631978 master-0 kubenswrapper[7271]: I0313 10:43:50.631871 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" 
containerName="controller-manager" containerID="cri-o://531f8d3aa930e35bb9ee67f1aa93559ea0aeef92bc7b549aec79dcf9206d8e53" gracePeriod=30 Mar 13 10:43:50.653930 master-0 kubenswrapper[7271]: I0313 10:43:50.653862 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"] Mar 13 10:43:50.654187 master-0 kubenswrapper[7271]: I0313 10:43:50.654138 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" podUID="d239be49-f88d-46e3-a101-3a46119597ce" containerName="route-controller-manager" containerID="cri-o://9a7412046a658318247dec7713ea99b14482d2ecbdfa4d40aa9244ac9b9a17de" gracePeriod=30 Mar 13 10:43:50.849214 master-0 kubenswrapper[7271]: I0313 10:43:50.849157 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2107b8fb-e707-4c48-af51-52dd046bf99b","Type":"ContainerStarted","Data":"bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035"} Mar 13 10:43:50.850970 master-0 kubenswrapper[7271]: I0313 10:43:50.850894 7271 generic.go:334] "Generic (PLEG): container finished" podID="d239be49-f88d-46e3-a101-3a46119597ce" containerID="9a7412046a658318247dec7713ea99b14482d2ecbdfa4d40aa9244ac9b9a17de" exitCode=0 Mar 13 10:43:50.851045 master-0 kubenswrapper[7271]: I0313 10:43:50.850987 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" event={"ID":"d239be49-f88d-46e3-a101-3a46119597ce","Type":"ContainerDied","Data":"9a7412046a658318247dec7713ea99b14482d2ecbdfa4d40aa9244ac9b9a17de"} Mar 13 10:43:50.853523 master-0 kubenswrapper[7271]: I0313 10:43:50.853482 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager_controller-manager-6954c8766d-g8z48_6317b62a-46e2-4a45-9c29-cb04c40d4425/controller-manager/0.log" Mar 13 10:43:50.853523 master-0 kubenswrapper[7271]: I0313 10:43:50.853518 7271 generic.go:334] "Generic (PLEG): container finished" podID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerID="531f8d3aa930e35bb9ee67f1aa93559ea0aeef92bc7b549aec79dcf9206d8e53" exitCode=0 Mar 13 10:43:50.853848 master-0 kubenswrapper[7271]: I0313 10:43:50.853542 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" event={"ID":"6317b62a-46e2-4a45-9c29-cb04c40d4425","Type":"ContainerDied","Data":"531f8d3aa930e35bb9ee67f1aa93559ea0aeef92bc7b549aec79dcf9206d8e53"} Mar 13 10:43:50.853848 master-0 kubenswrapper[7271]: I0313 10:43:50.853570 7271 scope.go:117] "RemoveContainer" containerID="e071f5df1cf13730e7c3a2d7e673c1b7527862b8e1f69ed525efba676776f319" Mar 13 10:43:50.869770 master-0 kubenswrapper[7271]: I0313 10:43:50.869529 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.869504716 podStartE2EDuration="2.869504716s" podCreationTimestamp="2026-03-13 10:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:43:50.865460736 +0000 UTC m=+485.392283136" watchObservedRunningTime="2026-03-13 10:43:50.869504716 +0000 UTC m=+485.396327106" Mar 13 10:43:50.884669 master-0 kubenswrapper[7271]: I0313 10:43:50.884533 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:50.884669 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:50.884669 
master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:50.884669 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:50.884669 master-0 kubenswrapper[7271]: I0313 10:43:50.884649 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:51.175473 master-0 kubenswrapper[7271]: I0313 10:43:51.175434 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48"
Mar 13 10:43:51.181019 master-0 kubenswrapper[7271]: I0313 10:43:51.180976 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"
Mar 13 10:43:51.310082 master-0 kubenswrapper[7271]: I0313 10:43:51.310003 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-config\") pod \"d239be49-f88d-46e3-a101-3a46119597ce\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") "
Mar 13 10:43:51.310082 master-0 kubenswrapper[7271]: I0313 10:43:51.310083 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d239be49-f88d-46e3-a101-3a46119597ce-serving-cert\") pod \"d239be49-f88d-46e3-a101-3a46119597ce\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") "
Mar 13 10:43:51.310343 master-0 kubenswrapper[7271]: I0313 10:43:51.310163 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d8dd\" (UniqueName: \"kubernetes.io/projected/6317b62a-46e2-4a45-9c29-cb04c40d4425-kube-api-access-2d8dd\") pod \"6317b62a-46e2-4a45-9c29-cb04c40d4425\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") "
Mar 13 10:43:51.310343 master-0 kubenswrapper[7271]: I0313 10:43:51.310204 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-client-ca\") pod \"6317b62a-46e2-4a45-9c29-cb04c40d4425\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") "
Mar 13 10:43:51.310343 master-0 kubenswrapper[7271]: I0313 10:43:51.310234 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-proxy-ca-bundles\") pod \"6317b62a-46e2-4a45-9c29-cb04c40d4425\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") "
Mar 13 10:43:51.310343 master-0 kubenswrapper[7271]: I0313 10:43:51.310269 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pmgf\" (UniqueName: \"kubernetes.io/projected/d239be49-f88d-46e3-a101-3a46119597ce-kube-api-access-7pmgf\") pod \"d239be49-f88d-46e3-a101-3a46119597ce\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") "
Mar 13 10:43:51.310343 master-0 kubenswrapper[7271]: I0313 10:43:51.310322 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-client-ca\") pod \"d239be49-f88d-46e3-a101-3a46119597ce\" (UID: \"d239be49-f88d-46e3-a101-3a46119597ce\") "
Mar 13 10:43:51.310538 master-0 kubenswrapper[7271]: I0313 10:43:51.310377 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-config\") pod \"6317b62a-46e2-4a45-9c29-cb04c40d4425\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") "
Mar 13 10:43:51.310538 master-0 kubenswrapper[7271]: I0313 10:43:51.310423 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6317b62a-46e2-4a45-9c29-cb04c40d4425-serving-cert\") pod \"6317b62a-46e2-4a45-9c29-cb04c40d4425\" (UID: \"6317b62a-46e2-4a45-9c29-cb04c40d4425\") "
Mar 13 10:43:51.310900 master-0 kubenswrapper[7271]: I0313 10:43:51.310828 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-config" (OuterVolumeSpecName: "config") pod "d239be49-f88d-46e3-a101-3a46119597ce" (UID: "d239be49-f88d-46e3-a101-3a46119597ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:43:51.311430 master-0 kubenswrapper[7271]: I0313 10:43:51.311385 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6317b62a-46e2-4a45-9c29-cb04c40d4425" (UID: "6317b62a-46e2-4a45-9c29-cb04c40d4425"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:43:51.311430 master-0 kubenswrapper[7271]: I0313 10:43:51.311406 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "d239be49-f88d-46e3-a101-3a46119597ce" (UID: "d239be49-f88d-46e3-a101-3a46119597ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:43:51.311577 master-0 kubenswrapper[7271]: I0313 10:43:51.311542 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-client-ca" (OuterVolumeSpecName: "client-ca") pod "6317b62a-46e2-4a45-9c29-cb04c40d4425" (UID: "6317b62a-46e2-4a45-9c29-cb04c40d4425"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:43:51.311763 master-0 kubenswrapper[7271]: I0313 10:43:51.311713 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-config" (OuterVolumeSpecName: "config") pod "6317b62a-46e2-4a45-9c29-cb04c40d4425" (UID: "6317b62a-46e2-4a45-9c29-cb04c40d4425"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:43:51.314886 master-0 kubenswrapper[7271]: I0313 10:43:51.314834 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d239be49-f88d-46e3-a101-3a46119597ce-kube-api-access-7pmgf" (OuterVolumeSpecName: "kube-api-access-7pmgf") pod "d239be49-f88d-46e3-a101-3a46119597ce" (UID: "d239be49-f88d-46e3-a101-3a46119597ce"). InnerVolumeSpecName "kube-api-access-7pmgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:43:51.315734 master-0 kubenswrapper[7271]: I0313 10:43:51.315690 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6317b62a-46e2-4a45-9c29-cb04c40d4425-kube-api-access-2d8dd" (OuterVolumeSpecName: "kube-api-access-2d8dd") pod "6317b62a-46e2-4a45-9c29-cb04c40d4425" (UID: "6317b62a-46e2-4a45-9c29-cb04c40d4425"). InnerVolumeSpecName "kube-api-access-2d8dd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:43:51.316117 master-0 kubenswrapper[7271]: I0313 10:43:51.316081 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6317b62a-46e2-4a45-9c29-cb04c40d4425-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6317b62a-46e2-4a45-9c29-cb04c40d4425" (UID: "6317b62a-46e2-4a45-9c29-cb04c40d4425"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:43:51.317818 master-0 kubenswrapper[7271]: I0313 10:43:51.317763 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d239be49-f88d-46e3-a101-3a46119597ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d239be49-f88d-46e3-a101-3a46119597ce" (UID: "d239be49-f88d-46e3-a101-3a46119597ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:43:51.412659 master-0 kubenswrapper[7271]: I0313 10:43:51.412467 7271 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 13 10:43:51.413018 master-0 kubenswrapper[7271]: I0313 10:43:51.412754 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pmgf\" (UniqueName: \"kubernetes.io/projected/d239be49-f88d-46e3-a101-3a46119597ce-kube-api-access-7pmgf\") on node \"master-0\" DevicePath \"\""
Mar 13 10:43:51.413018 master-0 kubenswrapper[7271]: I0313 10:43:51.412811 7271 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 10:43:51.413018 master-0 kubenswrapper[7271]: I0313 10:43:51.412827 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-config\") on node \"master-0\" DevicePath \"\""
Mar 13 10:43:51.413018 master-0 kubenswrapper[7271]: I0313 10:43:51.412839 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6317b62a-46e2-4a45-9c29-cb04c40d4425-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 10:43:51.413018 master-0 kubenswrapper[7271]: I0313 10:43:51.412850 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d239be49-f88d-46e3-a101-3a46119597ce-config\") on node \"master-0\" DevicePath \"\""
Mar 13 10:43:51.413018 master-0 kubenswrapper[7271]: I0313 10:43:51.412862 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d239be49-f88d-46e3-a101-3a46119597ce-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 10:43:51.413018 master-0 kubenswrapper[7271]: I0313 10:43:51.412872 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d8dd\" (UniqueName: \"kubernetes.io/projected/6317b62a-46e2-4a45-9c29-cb04c40d4425-kube-api-access-2d8dd\") on node \"master-0\" DevicePath \"\""
Mar 13 10:43:51.413018 master-0 kubenswrapper[7271]: I0313 10:43:51.412883 7271 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6317b62a-46e2-4a45-9c29-cb04c40d4425-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 10:43:51.861571 master-0 kubenswrapper[7271]: I0313 10:43:51.860745 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm" event={"ID":"d239be49-f88d-46e3-a101-3a46119597ce","Type":"ContainerDied","Data":"4696b518053adef4bb11b654559eaa82546e4638ff3b69c3346ba410132ca32c"}
Mar 13 10:43:51.861571 master-0 kubenswrapper[7271]: I0313 10:43:51.860803 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"
Mar 13 10:43:51.861571 master-0 kubenswrapper[7271]: I0313 10:43:51.860821 7271 scope.go:117] "RemoveContainer" containerID="9a7412046a658318247dec7713ea99b14482d2ecbdfa4d40aa9244ac9b9a17de"
Mar 13 10:43:51.862890 master-0 kubenswrapper[7271]: I0313 10:43:51.862861 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48"
Mar 13 10:43:51.862890 master-0 kubenswrapper[7271]: I0313 10:43:51.862851 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6954c8766d-g8z48" event={"ID":"6317b62a-46e2-4a45-9c29-cb04c40d4425","Type":"ContainerDied","Data":"86cb046a6c3fac4fbe29befba2b5b8736fb3773273af51b8d6b5596b1388eb8c"}
Mar 13 10:43:51.875621 master-0 kubenswrapper[7271]: I0313 10:43:51.875563 7271 scope.go:117] "RemoveContainer" containerID="531f8d3aa930e35bb9ee67f1aa93559ea0aeef92bc7b549aec79dcf9206d8e53"
Mar 13 10:43:51.883241 master-0 kubenswrapper[7271]: I0313 10:43:51.883194 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:51.883241 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:51.883241 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:51.883241 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:51.883435 master-0 kubenswrapper[7271]: I0313 10:43:51.883280 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:51.885102 master-0 kubenswrapper[7271]: I0313 10:43:51.885051 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6954c8766d-g8z48"]
Mar 13 10:43:51.890347 master-0 kubenswrapper[7271]: I0313 10:43:51.890283 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6954c8766d-g8z48"]
Mar 13 10:43:51.904915 master-0 kubenswrapper[7271]: I0313 10:43:51.904870 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"]
Mar 13 10:43:51.910627 master-0 kubenswrapper[7271]: I0313 10:43:51.910557 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657b8bf46d-r5dxm"]
Mar 13 10:43:52.714619 master-0 kubenswrapper[7271]: I0313 10:43:52.714505 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"]
Mar 13 10:43:52.714962 master-0 kubenswrapper[7271]: E0313 10:43:52.714857 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d239be49-f88d-46e3-a101-3a46119597ce" containerName="route-controller-manager"
Mar 13 10:43:52.714962 master-0 kubenswrapper[7271]: I0313 10:43:52.714872 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="d239be49-f88d-46e3-a101-3a46119597ce" containerName="route-controller-manager"
Mar 13 10:43:52.714962 master-0 kubenswrapper[7271]: E0313 10:43:52.714885 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager"
Mar 13 10:43:52.714962 master-0 kubenswrapper[7271]: I0313 10:43:52.714892 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager"
Mar 13 10:43:52.715108 master-0 kubenswrapper[7271]: I0313 10:43:52.715024 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager"
Mar 13 10:43:52.715108 master-0 kubenswrapper[7271]: I0313 10:43:52.715040 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager"
Mar 13 10:43:52.715108 master-0 kubenswrapper[7271]: I0313 10:43:52.715052 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="d239be49-f88d-46e3-a101-3a46119597ce" containerName="route-controller-manager"
Mar 13 10:43:52.715561 master-0 kubenswrapper[7271]: I0313 10:43:52.715526 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.717889 master-0 kubenswrapper[7271]: I0313 10:43:52.717833 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 10:43:52.718730 master-0 kubenswrapper[7271]: I0313 10:43:52.718700 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 10:43:52.718812 master-0 kubenswrapper[7271]: I0313 10:43:52.718700 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"]
Mar 13 10:43:52.719124 master-0 kubenswrapper[7271]: E0313 10:43:52.719098 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager"
Mar 13 10:43:52.719124 master-0 kubenswrapper[7271]: I0313 10:43:52.719120 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" containerName="controller-manager"
Mar 13 10:43:52.719735 master-0 kubenswrapper[7271]: I0313 10:43:52.719708 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.719877 master-0 kubenswrapper[7271]: I0313 10:43:52.719839 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 10:43:52.720081 master-0 kubenswrapper[7271]: I0313 10:43:52.720046 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 10:43:52.722112 master-0 kubenswrapper[7271]: I0313 10:43:52.722078 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 10:43:52.722203 master-0 kubenswrapper[7271]: I0313 10:43:52.722083 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 13 10:43:52.722785 master-0 kubenswrapper[7271]: I0313 10:43:52.722742 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-l5tkf"
Mar 13 10:43:52.724305 master-0 kubenswrapper[7271]: I0313 10:43:52.724266 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fn5mm"
Mar 13 10:43:52.725902 master-0 kubenswrapper[7271]: I0313 10:43:52.725863 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 13 10:43:52.726325 master-0 kubenswrapper[7271]: I0313 10:43:52.726282 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 10:43:52.726672 master-0 kubenswrapper[7271]: I0313 10:43:52.726642 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 10:43:52.729097 master-0 kubenswrapper[7271]: I0313 10:43:52.729027 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"]
Mar 13 10:43:52.731443 master-0 kubenswrapper[7271]: I0313 10:43:52.731406 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 10:43:52.736072 master-0 kubenswrapper[7271]: I0313 10:43:52.736008 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 13 10:43:52.749050 master-0 kubenswrapper[7271]: I0313 10:43:52.748990 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"]
Mar 13 10:43:52.834297 master-0 kubenswrapper[7271]: I0313 10:43:52.834211 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mbz\" (UniqueName: \"kubernetes.io/projected/928b1766-6bac-4fba-a982-42b050581bd0-kube-api-access-r4mbz\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.834573 master-0 kubenswrapper[7271]: I0313 10:43:52.834361 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j7m9\" (UniqueName: \"kubernetes.io/projected/6a5bf208-6131-44f5-b92e-6962af670a6c-kube-api-access-4j7m9\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.834573 master-0 kubenswrapper[7271]: I0313 10:43:52.834448 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-client-ca\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.834714 master-0 kubenswrapper[7271]: I0313 10:43:52.834671 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-config\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.834757 master-0 kubenswrapper[7271]: I0313 10:43:52.834717 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928b1766-6bac-4fba-a982-42b050581bd0-serving-cert\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.834802 master-0 kubenswrapper[7271]: I0313 10:43:52.834770 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-config\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.834802 master-0 kubenswrapper[7271]: I0313 10:43:52.834794 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-client-ca\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.834881 master-0 kubenswrapper[7271]: I0313 10:43:52.834864 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a5bf208-6131-44f5-b92e-6962af670a6c-serving-cert\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.836342 master-0 kubenswrapper[7271]: I0313 10:43:52.836321 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-proxy-ca-bundles\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.882268 master-0 kubenswrapper[7271]: I0313 10:43:52.882213 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:52.882268 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:52.882268 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:52.882268 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:52.882268 master-0 kubenswrapper[7271]: I0313 10:43:52.882269 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:52.937916 master-0 kubenswrapper[7271]: I0313 10:43:52.937837 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4mbz\" (UniqueName: \"kubernetes.io/projected/928b1766-6bac-4fba-a982-42b050581bd0-kube-api-access-r4mbz\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.938338 master-0 kubenswrapper[7271]: I0313 10:43:52.938318 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j7m9\" (UniqueName: \"kubernetes.io/projected/6a5bf208-6131-44f5-b92e-6962af670a6c-kube-api-access-4j7m9\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.938470 master-0 kubenswrapper[7271]: I0313 10:43:52.938450 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-client-ca\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.938648 master-0 kubenswrapper[7271]: I0313 10:43:52.938628 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-config\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.938979 master-0 kubenswrapper[7271]: I0313 10:43:52.938951 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928b1766-6bac-4fba-a982-42b050581bd0-serving-cert\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.939127 master-0 kubenswrapper[7271]: I0313 10:43:52.939109 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-config\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.939254 master-0 kubenswrapper[7271]: I0313 10:43:52.939237 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-client-ca\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.939402 master-0 kubenswrapper[7271]: I0313 10:43:52.939382 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a5bf208-6131-44f5-b92e-6962af670a6c-serving-cert\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.939502 master-0 kubenswrapper[7271]: I0313 10:43:52.939485 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-proxy-ca-bundles\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.939694 master-0 kubenswrapper[7271]: I0313 10:43:52.939648 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-client-ca\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.940189 master-0 kubenswrapper[7271]: I0313 10:43:52.940156 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-client-ca\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.940639 master-0 kubenswrapper[7271]: I0313 10:43:52.940605 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-config\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.940992 master-0 kubenswrapper[7271]: I0313 10:43:52.940943 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-config\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.941437 master-0 kubenswrapper[7271]: I0313 10:43:52.941410 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-proxy-ca-bundles\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.943450 master-0 kubenswrapper[7271]: I0313 10:43:52.943382 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a5bf208-6131-44f5-b92e-6962af670a6c-serving-cert\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.943897 master-0 kubenswrapper[7271]: I0313 10:43:52.943844 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928b1766-6bac-4fba-a982-42b050581bd0-serving-cert\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:52.956313 master-0 kubenswrapper[7271]: I0313 10:43:52.956262 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j7m9\" (UniqueName: \"kubernetes.io/projected/6a5bf208-6131-44f5-b92e-6962af670a6c-kube-api-access-4j7m9\") pod \"controller-manager-7f9d5c7f9d-n8tcb\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:52.956688 master-0 kubenswrapper[7271]: I0313 10:43:52.956629 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mbz\" (UniqueName: \"kubernetes.io/projected/928b1766-6bac-4fba-a982-42b050581bd0-kube-api-access-r4mbz\") pod \"route-controller-manager-59bc577c56-74qpq\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:53.058446 master-0 kubenswrapper[7271]: I0313 10:43:53.058379 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:53.082921 master-0 kubenswrapper[7271]: I0313 10:43:53.082831 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:53.495494 master-0 kubenswrapper[7271]: I0313 10:43:53.495424 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"]
Mar 13 10:43:53.496094 master-0 kubenswrapper[7271]: W0313 10:43:53.496048 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a5bf208_6131_44f5_b92e_6962af670a6c.slice/crio-75ded09f002aee5e524b9c490ee7d7119c38855f13e8560184884503b7c817ed WatchSource:0}: Error finding container 75ded09f002aee5e524b9c490ee7d7119c38855f13e8560184884503b7c817ed: Status 404 returned error can't find the container with id 75ded09f002aee5e524b9c490ee7d7119c38855f13e8560184884503b7c817ed
Mar 13 10:43:53.574948 master-0 kubenswrapper[7271]: I0313 10:43:53.574908 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"]
Mar 13 10:43:53.586812 master-0 kubenswrapper[7271]: W0313 10:43:53.586757 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod928b1766_6bac_4fba_a982_42b050581bd0.slice/crio-41b9f064863771d179e00d0168af1f05c078d7dbb7b63cd568d989269e34a9d4 WatchSource:0}: Error finding container 41b9f064863771d179e00d0168af1f05c078d7dbb7b63cd568d989269e34a9d4: Status 404 returned error can't find the container with id 41b9f064863771d179e00d0168af1f05c078d7dbb7b63cd568d989269e34a9d4
Mar 13 10:43:53.656107 master-0 kubenswrapper[7271]: I0313 10:43:53.656059 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6317b62a-46e2-4a45-9c29-cb04c40d4425" path="/var/lib/kubelet/pods/6317b62a-46e2-4a45-9c29-cb04c40d4425/volumes"
Mar 13 10:43:53.656933 master-0 kubenswrapper[7271]: I0313 10:43:53.656832 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d239be49-f88d-46e3-a101-3a46119597ce" path="/var/lib/kubelet/pods/d239be49-f88d-46e3-a101-3a46119597ce/volumes"
Mar 13 10:43:53.883737 master-0 kubenswrapper[7271]: I0313 10:43:53.883684 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq" event={"ID":"928b1766-6bac-4fba-a982-42b050581bd0","Type":"ContainerStarted","Data":"9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7"}
Mar 13 10:43:53.883737 master-0 kubenswrapper[7271]: I0313 10:43:53.883736 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq" event={"ID":"928b1766-6bac-4fba-a982-42b050581bd0","Type":"ContainerStarted","Data":"41b9f064863771d179e00d0168af1f05c078d7dbb7b63cd568d989269e34a9d4"}
Mar 13 10:43:53.884542 master-0 kubenswrapper[7271]: I0313 10:43:53.884518 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:53.885720 master-0 kubenswrapper[7271]: I0313 10:43:53.885683 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb" event={"ID":"6a5bf208-6131-44f5-b92e-6962af670a6c","Type":"ContainerStarted","Data":"2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58"}
Mar 13 10:43:53.885782 master-0 kubenswrapper[7271]: I0313 10:43:53.885730 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb" event={"ID":"6a5bf208-6131-44f5-b92e-6962af670a6c","Type":"ContainerStarted","Data":"75ded09f002aee5e524b9c490ee7d7119c38855f13e8560184884503b7c817ed"}
Mar 13 10:43:53.885937 master-0 kubenswrapper[7271]: I0313 10:43:53.885917 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:53.887198 master-0 kubenswrapper[7271]: I0313 10:43:53.887174 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:53.887198 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:53.887198 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:53.887198 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:53.887381 master-0 kubenswrapper[7271]: I0313 10:43:53.887357 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:53.891286 master-0 kubenswrapper[7271]: I0313 10:43:53.891239 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"
Mar 13 10:43:53.899812 master-0 kubenswrapper[7271]: I0313 10:43:53.899755 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq" podStartSLOduration=2.899740939 podStartE2EDuration="2.899740939s" podCreationTimestamp="2026-03-13 10:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:43:53.89755595 +0000 UTC m=+488.424378350" watchObservedRunningTime="2026-03-13 10:43:53.899740939 +0000 UTC m=+488.426563329"
Mar 13 10:43:53.916358 master-0 kubenswrapper[7271]: I0313 10:43:53.916260 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb" podStartSLOduration=2.916240798 podStartE2EDuration="2.916240798s" podCreationTimestamp="2026-03-13 10:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:43:53.914239044 +0000 UTC m=+488.441061454" watchObservedRunningTime="2026-03-13 10:43:53.916240798 +0000 UTC m=+488.443063188"
Mar 13 10:43:54.239330 master-0 kubenswrapper[7271]: I0313 10:43:54.239209 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"
Mar 13 10:43:54.882216 master-0 kubenswrapper[7271]: I0313 10:43:54.882149 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:43:54.882216 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:43:54.882216 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:43:54.882216 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:43:54.882216 master-0 kubenswrapper[7271]: I0313 10:43:54.882214 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:43:55.883565 master-0 kubenswrapper[7271]: I0313 10:43:55.883499 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:55.883565 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:55.883565 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:55.883565 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:55.884386 master-0 kubenswrapper[7271]: I0313 10:43:55.884350 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:56.883183 master-0 kubenswrapper[7271]: I0313 10:43:56.883133 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:56.883183 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:56.883183 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:56.883183 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:56.883978 master-0 kubenswrapper[7271]: I0313 10:43:56.883205 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:57.875553 master-0 kubenswrapper[7271]: I0313 10:43:57.875384 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 13 10:43:57.877685 master-0 kubenswrapper[7271]: I0313 10:43:57.877639 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:57.881787 master-0 kubenswrapper[7271]: I0313 10:43:57.881691 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-75rf9" Mar 13 10:43:57.882455 master-0 kubenswrapper[7271]: I0313 10:43:57.882413 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 10:43:57.886289 master-0 kubenswrapper[7271]: I0313 10:43:57.886214 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:57.886289 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:57.886289 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:57.886289 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:57.887069 master-0 kubenswrapper[7271]: I0313 10:43:57.886322 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:57.887069 master-0 kubenswrapper[7271]: I0313 10:43:57.886918 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 13 10:43:57.917252 master-0 kubenswrapper[7271]: I0313 10:43:57.917149 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-var-lock\") pod \"installer-5-master-0\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:57.917852 master-0 
kubenswrapper[7271]: I0313 10:43:57.917792 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:58.019069 master-0 kubenswrapper[7271]: I0313 10:43:58.019005 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:58.019069 master-0 kubenswrapper[7271]: I0313 10:43:58.019077 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-var-lock\") pod \"installer-5-master-0\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:58.019348 master-0 kubenswrapper[7271]: I0313 10:43:58.019107 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:58.019348 master-0 kubenswrapper[7271]: I0313 10:43:58.019126 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kube-api-access\") pod \"installer-5-master-0\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:58.019348 
master-0 kubenswrapper[7271]: I0313 10:43:58.019233 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-var-lock\") pod \"installer-5-master-0\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:58.120723 master-0 kubenswrapper[7271]: I0313 10:43:58.120643 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kube-api-access\") pod \"installer-5-master-0\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:58.140110 master-0 kubenswrapper[7271]: I0313 10:43:58.139962 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kube-api-access\") pod \"installer-5-master-0\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:58.207835 master-0 kubenswrapper[7271]: I0313 10:43:58.207735 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 10:43:58.613255 master-0 kubenswrapper[7271]: I0313 10:43:58.613191 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 13 10:43:58.619634 master-0 kubenswrapper[7271]: W0313 10:43:58.619503 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod05f7830b_51cc_45d2_bbb3_ac01eeed57ac.slice/crio-8806b35c314af20732bacfc49d8a1556a0a610503737d33085a256d12444c681 WatchSource:0}: Error finding container 8806b35c314af20732bacfc49d8a1556a0a610503737d33085a256d12444c681: Status 404 returned error can't find the container with id 8806b35c314af20732bacfc49d8a1556a0a610503737d33085a256d12444c681 Mar 13 10:43:58.883125 master-0 kubenswrapper[7271]: I0313 10:43:58.882978 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:58.883125 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:58.883125 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:58.883125 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:58.883125 master-0 kubenswrapper[7271]: I0313 10:43:58.883070 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:58.917425 master-0 kubenswrapper[7271]: I0313 10:43:58.917330 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"05f7830b-51cc-45d2-bbb3-ac01eeed57ac","Type":"ContainerStarted","Data":"8806b35c314af20732bacfc49d8a1556a0a610503737d33085a256d12444c681"} 
Mar 13 10:43:59.883246 master-0 kubenswrapper[7271]: I0313 10:43:59.883191 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:43:59.883246 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:43:59.883246 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:43:59.883246 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:43:59.883559 master-0 kubenswrapper[7271]: I0313 10:43:59.883256 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:43:59.924019 master-0 kubenswrapper[7271]: I0313 10:43:59.923955 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"05f7830b-51cc-45d2-bbb3-ac01eeed57ac","Type":"ContainerStarted","Data":"e30ddedef616e76982e7503ccdc6b701bfe5c6467184889999283ee9de5f7a92"} Mar 13 10:43:59.941931 master-0 kubenswrapper[7271]: I0313 10:43:59.941853 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.941829497 podStartE2EDuration="2.941829497s" podCreationTimestamp="2026-03-13 10:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:43:59.936146942 +0000 UTC m=+494.462969332" watchObservedRunningTime="2026-03-13 10:43:59.941829497 +0000 UTC m=+494.468651887" Mar 13 10:44:00.883130 master-0 kubenswrapper[7271]: I0313 10:44:00.883039 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:00.883130 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:00.883130 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:00.883130 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:00.883130 master-0 kubenswrapper[7271]: I0313 10:44:00.883132 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:01.883436 master-0 kubenswrapper[7271]: I0313 10:44:01.883341 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:01.883436 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:01.883436 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:01.883436 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:01.884069 master-0 kubenswrapper[7271]: I0313 10:44:01.883475 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:02.883168 master-0 kubenswrapper[7271]: I0313 10:44:02.883117 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:02.883168 master-0 kubenswrapper[7271]: 
[-]has-synced failed: reason withheld Mar 13 10:44:02.883168 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:02.883168 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:02.883862 master-0 kubenswrapper[7271]: I0313 10:44:02.883183 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:03.035228 master-0 kubenswrapper[7271]: I0313 10:44:03.035141 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 10:44:03.035547 master-0 kubenswrapper[7271]: I0313 10:44:03.035398 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="2107b8fb-e707-4c48-af51-52dd046bf99b" containerName="installer" containerID="cri-o://bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035" gracePeriod=30 Mar 13 10:44:03.883241 master-0 kubenswrapper[7271]: I0313 10:44:03.883183 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:03.883241 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:03.883241 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:03.883241 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:03.883964 master-0 kubenswrapper[7271]: I0313 10:44:03.883255 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 
10:44:04.883809 master-0 kubenswrapper[7271]: I0313 10:44:04.883711 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:04.883809 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:04.883809 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:04.883809 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:04.884465 master-0 kubenswrapper[7271]: I0313 10:44:04.883859 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:05.437601 master-0 kubenswrapper[7271]: I0313 10:44:05.437487 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 10:44:05.438405 master-0 kubenswrapper[7271]: I0313 10:44:05.438376 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.452038 master-0 kubenswrapper[7271]: I0313 10:44:05.452001 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 10:44:05.637146 master-0 kubenswrapper[7271]: I0313 10:44:05.636384 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.637146 master-0 kubenswrapper[7271]: I0313 10:44:05.636473 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-var-lock\") pod \"installer-3-master-0\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.637146 master-0 kubenswrapper[7271]: I0313 10:44:05.636509 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.739044 master-0 kubenswrapper[7271]: I0313 10:44:05.738733 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.739432 master-0 
kubenswrapper[7271]: I0313 10:44:05.739391 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-var-lock\") pod \"installer-3-master-0\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.739683 master-0 kubenswrapper[7271]: I0313 10:44:05.739539 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-var-lock\") pod \"installer-3-master-0\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.739984 master-0 kubenswrapper[7271]: I0313 10:44:05.739946 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.740118 master-0 kubenswrapper[7271]: I0313 10:44:05.740097 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.756838 master-0 kubenswrapper[7271]: I0313 10:44:05.756769 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.759898 master-0 
kubenswrapper[7271]: I0313 10:44:05.759846 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:44:05.883889 master-0 kubenswrapper[7271]: I0313 10:44:05.883846 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:05.883889 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:05.883889 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:05.883889 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:05.884533 master-0 kubenswrapper[7271]: I0313 10:44:05.883908 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:06.153075 master-0 kubenswrapper[7271]: I0313 10:44:06.152982 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 10:44:06.160148 master-0 kubenswrapper[7271]: W0313 10:44:06.160078 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode5a41bd7_f3fe_4c5b_88fd_ddbbebcb440c.slice/crio-5b6e436b9cc09a918e66ed32313004ed8edc16d26f739da2414fd5c6347334d8 WatchSource:0}: Error finding container 5b6e436b9cc09a918e66ed32313004ed8edc16d26f739da2414fd5c6347334d8: Status 404 returned error can't find the container with id 5b6e436b9cc09a918e66ed32313004ed8edc16d26f739da2414fd5c6347334d8 Mar 13 10:44:06.882905 master-0 kubenswrapper[7271]: I0313 10:44:06.882844 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:06.882905 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:06.882905 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:06.882905 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:06.883199 master-0 kubenswrapper[7271]: I0313 10:44:06.882917 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:06.968345 master-0 kubenswrapper[7271]: I0313 10:44:06.968287 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c","Type":"ContainerStarted","Data":"3c84db0498138b2ad19628a630c45e3de3b287d4abdd1560f1b74b129ad3abaf"} Mar 13 10:44:06.968345 master-0 kubenswrapper[7271]: I0313 10:44:06.968346 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c","Type":"ContainerStarted","Data":"5b6e436b9cc09a918e66ed32313004ed8edc16d26f739da2414fd5c6347334d8"} Mar 13 10:44:06.989572 master-0 kubenswrapper[7271]: I0313 10:44:06.989454 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=1.989423442 podStartE2EDuration="1.989423442s" podCreationTimestamp="2026-03-13 10:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:06.984231991 +0000 UTC m=+501.511054391" watchObservedRunningTime="2026-03-13 10:44:06.989423442 +0000 UTC m=+501.516245872" Mar 13 10:44:07.882906 master-0 
kubenswrapper[7271]: I0313 10:44:07.882846 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:07.882906 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:07.882906 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:07.882906 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:07.883360 master-0 kubenswrapper[7271]: I0313 10:44:07.882909 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:08.882411 master-0 kubenswrapper[7271]: I0313 10:44:08.882346 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:08.882411 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:08.882411 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:08.882411 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:08.882998 master-0 kubenswrapper[7271]: I0313 10:44:08.882441 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:09.883232 master-0 kubenswrapper[7271]: I0313 10:44:09.883167 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:09.883232 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:09.883232 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:09.883232 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:09.884163 master-0 kubenswrapper[7271]: I0313 10:44:09.883246 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:10.883068 master-0 kubenswrapper[7271]: I0313 10:44:10.883000 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:10.883068 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:10.883068 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:10.883068 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:10.883068 master-0 kubenswrapper[7271]: I0313 10:44:10.883068 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:11.883740 master-0 kubenswrapper[7271]: I0313 10:44:11.883611 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:11.883740 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:11.883740 master-0 kubenswrapper[7271]: 
[+]process-running ok Mar 13 10:44:11.883740 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:11.883740 master-0 kubenswrapper[7271]: I0313 10:44:11.883681 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:12.883053 master-0 kubenswrapper[7271]: I0313 10:44:12.882995 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:12.883053 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:12.883053 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:12.883053 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:12.883358 master-0 kubenswrapper[7271]: I0313 10:44:12.883060 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:13.882990 master-0 kubenswrapper[7271]: I0313 10:44:13.882916 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:13.882990 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:13.882990 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:13.882990 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:13.883700 master-0 kubenswrapper[7271]: I0313 10:44:13.882995 7271 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:14.882429 master-0 kubenswrapper[7271]: I0313 10:44:14.882379 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:14.882429 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:14.882429 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:14.882429 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:14.882845 master-0 kubenswrapper[7271]: I0313 10:44:14.882810 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:15.882430 master-0 kubenswrapper[7271]: I0313 10:44:15.882345 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:15.882430 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:15.882430 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:15.882430 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:15.883281 master-0 kubenswrapper[7271]: I0313 10:44:15.882436 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
13 10:44:16.882576 master-0 kubenswrapper[7271]: I0313 10:44:16.882511 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:16.882576 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:16.882576 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:16.882576 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:16.883222 master-0 kubenswrapper[7271]: I0313 10:44:16.882609 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:17.883120 master-0 kubenswrapper[7271]: I0313 10:44:17.883061 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:17.883120 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:17.883120 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:17.883120 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:17.883739 master-0 kubenswrapper[7271]: I0313 10:44:17.883140 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:18.882906 master-0 kubenswrapper[7271]: I0313 10:44:18.882798 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:18.882906 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:18.882906 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:18.882906 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:18.882906 master-0 kubenswrapper[7271]: I0313 10:44:18.882870 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:19.883037 master-0 kubenswrapper[7271]: I0313 10:44:19.882959 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:19.883037 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:19.883037 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:19.883037 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:19.884017 master-0 kubenswrapper[7271]: I0313 10:44:19.883101 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:20.854482 master-0 kubenswrapper[7271]: I0313 10:44:20.854430 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2107b8fb-e707-4c48-af51-52dd046bf99b/installer/0.log" Mar 13 10:44:20.854910 master-0 kubenswrapper[7271]: I0313 10:44:20.854866 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:44:20.872803 master-0 kubenswrapper[7271]: I0313 10:44:20.868891 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-kubelet-dir\") pod \"2107b8fb-e707-4c48-af51-52dd046bf99b\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " Mar 13 10:44:20.872803 master-0 kubenswrapper[7271]: I0313 10:44:20.869003 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-var-lock\") pod \"2107b8fb-e707-4c48-af51-52dd046bf99b\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " Mar 13 10:44:20.872803 master-0 kubenswrapper[7271]: I0313 10:44:20.869014 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2107b8fb-e707-4c48-af51-52dd046bf99b" (UID: "2107b8fb-e707-4c48-af51-52dd046bf99b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:44:20.872803 master-0 kubenswrapper[7271]: I0313 10:44:20.869155 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2107b8fb-e707-4c48-af51-52dd046bf99b-kube-api-access\") pod \"2107b8fb-e707-4c48-af51-52dd046bf99b\" (UID: \"2107b8fb-e707-4c48-af51-52dd046bf99b\") " Mar 13 10:44:20.872803 master-0 kubenswrapper[7271]: I0313 10:44:20.869144 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-var-lock" (OuterVolumeSpecName: "var-lock") pod "2107b8fb-e707-4c48-af51-52dd046bf99b" (UID: "2107b8fb-e707-4c48-af51-52dd046bf99b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:44:20.872803 master-0 kubenswrapper[7271]: I0313 10:44:20.869795 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:20.872803 master-0 kubenswrapper[7271]: I0313 10:44:20.869838 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2107b8fb-e707-4c48-af51-52dd046bf99b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:20.874403 master-0 kubenswrapper[7271]: I0313 10:44:20.873958 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2107b8fb-e707-4c48-af51-52dd046bf99b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2107b8fb-e707-4c48-af51-52dd046bf99b" (UID: "2107b8fb-e707-4c48-af51-52dd046bf99b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:44:20.884368 master-0 kubenswrapper[7271]: I0313 10:44:20.884144 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:20.884368 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:20.884368 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:20.884368 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:20.884368 master-0 kubenswrapper[7271]: I0313 10:44:20.884209 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:20.970932 master-0 kubenswrapper[7271]: I0313 10:44:20.970871 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2107b8fb-e707-4c48-af51-52dd046bf99b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:21.052981 master-0 kubenswrapper[7271]: I0313 10:44:21.052929 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_2107b8fb-e707-4c48-af51-52dd046bf99b/installer/0.log" Mar 13 10:44:21.052981 master-0 kubenswrapper[7271]: I0313 10:44:21.052984 7271 generic.go:334] "Generic (PLEG): container finished" podID="2107b8fb-e707-4c48-af51-52dd046bf99b" containerID="bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035" exitCode=1 Mar 13 10:44:21.053250 master-0 kubenswrapper[7271]: I0313 10:44:21.053016 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" 
event={"ID":"2107b8fb-e707-4c48-af51-52dd046bf99b","Type":"ContainerDied","Data":"bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035"} Mar 13 10:44:21.053250 master-0 kubenswrapper[7271]: I0313 10:44:21.053046 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"2107b8fb-e707-4c48-af51-52dd046bf99b","Type":"ContainerDied","Data":"0a2949eb2340acf4c82bc49edd83681169d3e69b3e85e3a345ba2e00cc6ab753"} Mar 13 10:44:21.053250 master-0 kubenswrapper[7271]: I0313 10:44:21.053062 7271 scope.go:117] "RemoveContainer" containerID="bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035" Mar 13 10:44:21.053250 master-0 kubenswrapper[7271]: I0313 10:44:21.053164 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 10:44:21.070767 master-0 kubenswrapper[7271]: I0313 10:44:21.070717 7271 scope.go:117] "RemoveContainer" containerID="bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035" Mar 13 10:44:21.071948 master-0 kubenswrapper[7271]: E0313 10:44:21.071907 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035\": container with ID starting with bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035 not found: ID does not exist" containerID="bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035" Mar 13 10:44:21.072034 master-0 kubenswrapper[7271]: I0313 10:44:21.071966 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035"} err="failed to get container status \"bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035\": rpc error: code = NotFound desc = could not find container 
\"bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035\": container with ID starting with bfd02001e9f5ee86443a478f0929467e6284c37877a3559705a401614451b035 not found: ID does not exist" Mar 13 10:44:21.089766 master-0 kubenswrapper[7271]: I0313 10:44:21.089704 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 10:44:21.092452 master-0 kubenswrapper[7271]: I0313 10:44:21.092375 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 10:44:21.653234 master-0 kubenswrapper[7271]: I0313 10:44:21.653179 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2107b8fb-e707-4c48-af51-52dd046bf99b" path="/var/lib/kubelet/pods/2107b8fb-e707-4c48-af51-52dd046bf99b/volumes" Mar 13 10:44:21.847568 master-0 kubenswrapper[7271]: I0313 10:44:21.847506 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vpnmf"] Mar 13 10:44:21.847895 master-0 kubenswrapper[7271]: E0313 10:44:21.847830 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2107b8fb-e707-4c48-af51-52dd046bf99b" containerName="installer" Mar 13 10:44:21.847895 master-0 kubenswrapper[7271]: I0313 10:44:21.847846 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="2107b8fb-e707-4c48-af51-52dd046bf99b" containerName="installer" Mar 13 10:44:21.847976 master-0 kubenswrapper[7271]: I0313 10:44:21.847965 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="2107b8fb-e707-4c48-af51-52dd046bf99b" containerName="installer" Mar 13 10:44:21.848465 master-0 kubenswrapper[7271]: I0313 10:44:21.848436 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.850624 master-0 kubenswrapper[7271]: I0313 10:44:21.850453 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 13 10:44:21.850896 master-0 kubenswrapper[7271]: I0313 10:44:21.850882 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-w85p5" Mar 13 10:44:21.882779 master-0 kubenswrapper[7271]: I0313 10:44:21.882731 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:21.882779 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:21.882779 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:21.882779 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:21.883069 master-0 kubenswrapper[7271]: I0313 10:44:21.882801 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:21.883955 master-0 kubenswrapper[7271]: I0313 10:44:21.883889 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1100866-59a5-4653-b8eb-7945515ae057-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.884012 master-0 kubenswrapper[7271]: I0313 10:44:21.883980 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/d1100866-59a5-4653-b8eb-7945515ae057-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.884119 master-0 kubenswrapper[7271]: I0313 10:44:21.884081 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d1100866-59a5-4653-b8eb-7945515ae057-ready\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.884425 master-0 kubenswrapper[7271]: I0313 10:44:21.884363 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvr2l\" (UniqueName: \"kubernetes.io/projected/d1100866-59a5-4653-b8eb-7945515ae057-kube-api-access-kvr2l\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.985197 master-0 kubenswrapper[7271]: I0313 10:44:21.985079 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1100866-59a5-4653-b8eb-7945515ae057-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.985711 master-0 kubenswrapper[7271]: I0313 10:44:21.985663 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d1100866-59a5-4653-b8eb-7945515ae057-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.985747 master-0 kubenswrapper[7271]: I0313 10:44:21.985244 7271 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1100866-59a5-4653-b8eb-7945515ae057-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.985747 master-0 kubenswrapper[7271]: I0313 10:44:21.985738 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d1100866-59a5-4653-b8eb-7945515ae057-ready\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.985821 master-0 kubenswrapper[7271]: I0313 10:44:21.985802 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvr2l\" (UniqueName: \"kubernetes.io/projected/d1100866-59a5-4653-b8eb-7945515ae057-kube-api-access-kvr2l\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.986388 master-0 kubenswrapper[7271]: I0313 10:44:21.986363 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d1100866-59a5-4653-b8eb-7945515ae057-ready\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:21.986432 master-0 kubenswrapper[7271]: I0313 10:44:21.986391 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d1100866-59a5-4653-b8eb-7945515ae057-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:22.001264 master-0 kubenswrapper[7271]: I0313 
10:44:22.001207 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvr2l\" (UniqueName: \"kubernetes.io/projected/d1100866-59a5-4653-b8eb-7945515ae057-kube-api-access-kvr2l\") pod \"cni-sysctl-allowlist-ds-vpnmf\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:22.167750 master-0 kubenswrapper[7271]: I0313 10:44:22.167687 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:22.187157 master-0 kubenswrapper[7271]: W0313 10:44:22.187092 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1100866_59a5_4653_b8eb_7945515ae057.slice/crio-ae954fdd1298594ecbfea8f7764251c1b6f5d4b103893590537173967636deb0 WatchSource:0}: Error finding container ae954fdd1298594ecbfea8f7764251c1b6f5d4b103893590537173967636deb0: Status 404 returned error can't find the container with id ae954fdd1298594ecbfea8f7764251c1b6f5d4b103893590537173967636deb0 Mar 13 10:44:22.883022 master-0 kubenswrapper[7271]: I0313 10:44:22.882915 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:22.883022 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:22.883022 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:22.883022 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:22.883022 master-0 kubenswrapper[7271]: I0313 10:44:22.882991 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 13 10:44:23.066486 master-0 kubenswrapper[7271]: I0313 10:44:23.066412 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" event={"ID":"d1100866-59a5-4653-b8eb-7945515ae057","Type":"ContainerStarted","Data":"77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034"} Mar 13 10:44:23.066486 master-0 kubenswrapper[7271]: I0313 10:44:23.066473 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" event={"ID":"d1100866-59a5-4653-b8eb-7945515ae057","Type":"ContainerStarted","Data":"ae954fdd1298594ecbfea8f7764251c1b6f5d4b103893590537173967636deb0"} Mar 13 10:44:23.067191 master-0 kubenswrapper[7271]: I0313 10:44:23.066878 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:23.081366 master-0 kubenswrapper[7271]: I0313 10:44:23.081281 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" podStartSLOduration=2.081261914 podStartE2EDuration="2.081261914s" podCreationTimestamp="2026-03-13 10:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:23.077722308 +0000 UTC m=+517.604544698" watchObservedRunningTime="2026-03-13 10:44:23.081261914 +0000 UTC m=+517.608084304" Mar 13 10:44:23.095562 master-0 kubenswrapper[7271]: I0313 10:44:23.095475 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:23.871622 master-0 kubenswrapper[7271]: I0313 10:44:23.869494 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vpnmf"] Mar 13 10:44:23.882529 master-0 kubenswrapper[7271]: I0313 10:44:23.882491 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:23.882529 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:23.882529 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:23.882529 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:23.882855 master-0 kubenswrapper[7271]: I0313 10:44:23.882554 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:24.883568 master-0 kubenswrapper[7271]: I0313 10:44:24.883499 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:24.883568 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:24.883568 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:24.883568 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:24.884327 master-0 kubenswrapper[7271]: I0313 10:44:24.883580 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:25.079122 master-0 kubenswrapper[7271]: I0313 10:44:25.079029 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" podUID="d1100866-59a5-4653-b8eb-7945515ae057" containerName="kube-multus-additional-cni-plugins" 
containerID="cri-o://77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" gracePeriod=30 Mar 13 10:44:25.882611 master-0 kubenswrapper[7271]: I0313 10:44:25.882508 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:25.882611 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:25.882611 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:25.882611 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:25.882994 master-0 kubenswrapper[7271]: I0313 10:44:25.882679 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:26.676130 master-0 kubenswrapper[7271]: I0313 10:44:26.676060 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 13 10:44:26.677362 master-0 kubenswrapper[7271]: I0313 10:44:26.677328 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.680784 master-0 kubenswrapper[7271]: I0313 10:44:26.680709 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 13 10:44:26.680931 master-0 kubenswrapper[7271]: I0313 10:44:26.680815 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-vl8rr" Mar 13 10:44:26.693186 master-0 kubenswrapper[7271]: I0313 10:44:26.693078 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 13 10:44:26.744031 master-0 kubenswrapper[7271]: I0313 10:44:26.743938 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.744359 master-0 kubenswrapper[7271]: I0313 10:44:26.744112 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-var-lock\") pod \"installer-2-master-0\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.744359 master-0 kubenswrapper[7271]: I0313 10:44:26.744304 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.845554 master-0 kubenswrapper[7271]: I0313 10:44:26.845471 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.845554 master-0 kubenswrapper[7271]: I0313 10:44:26.845574 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.846001 master-0 kubenswrapper[7271]: I0313 10:44:26.845668 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-var-lock\") pod \"installer-2-master-0\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.846001 master-0 kubenswrapper[7271]: I0313 10:44:26.845678 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.846001 master-0 kubenswrapper[7271]: I0313 10:44:26.845799 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-var-lock\") pod \"installer-2-master-0\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.864936 master-0 kubenswrapper[7271]: I0313 10:44:26.864772 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kube-api-access\") pod 
\"installer-2-master-0\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:26.883670 master-0 kubenswrapper[7271]: I0313 10:44:26.883605 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:26.883670 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:26.883670 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:26.883670 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:26.883906 master-0 kubenswrapper[7271]: I0313 10:44:26.883696 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:27.006826 master-0 kubenswrapper[7271]: I0313 10:44:27.006664 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 10:44:27.418686 master-0 kubenswrapper[7271]: I0313 10:44:27.416109 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 13 10:44:27.426688 master-0 kubenswrapper[7271]: W0313 10:44:27.426628 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1769d48d_7ef0_48ee_9b7d_b46151ae5df6.slice/crio-0edd269bc9bfd58457b3b88ab218fa96e34778af571fb8288c4d56256e1a1e4d WatchSource:0}: Error finding container 0edd269bc9bfd58457b3b88ab218fa96e34778af571fb8288c4d56256e1a1e4d: Status 404 returned error can't find the container with id 0edd269bc9bfd58457b3b88ab218fa96e34778af571fb8288c4d56256e1a1e4d Mar 13 10:44:27.882664 master-0 kubenswrapper[7271]: I0313 10:44:27.882363 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:27.882664 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:27.882664 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:27.882664 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:27.882664 master-0 kubenswrapper[7271]: I0313 10:44:27.882481 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:28.100032 master-0 kubenswrapper[7271]: I0313 10:44:28.099947 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"1769d48d-7ef0-48ee-9b7d-b46151ae5df6","Type":"ContainerStarted","Data":"5399579cbf50883dcc4aa7699616e64f69ad85ad80602aae96557b44afc05a5a"} Mar 13 10:44:28.100032 
master-0 kubenswrapper[7271]: I0313 10:44:28.100014 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"1769d48d-7ef0-48ee-9b7d-b46151ae5df6","Type":"ContainerStarted","Data":"0edd269bc9bfd58457b3b88ab218fa96e34778af571fb8288c4d56256e1a1e4d"} Mar 13 10:44:28.121868 master-0 kubenswrapper[7271]: I0313 10:44:28.121723 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.12170416 podStartE2EDuration="2.12170416s" podCreationTimestamp="2026-03-13 10:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:28.116839427 +0000 UTC m=+522.643661817" watchObservedRunningTime="2026-03-13 10:44:28.12170416 +0000 UTC m=+522.648526550" Mar 13 10:44:28.884212 master-0 kubenswrapper[7271]: I0313 10:44:28.884083 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:28.884212 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:28.884212 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:28.884212 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:28.885354 master-0 kubenswrapper[7271]: I0313 10:44:28.884248 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:29.884539 master-0 kubenswrapper[7271]: I0313 10:44:29.884429 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:29.884539 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:29.884539 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:29.884539 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:29.884539 master-0 kubenswrapper[7271]: I0313 10:44:29.884512 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:30.048854 master-0 kubenswrapper[7271]: I0313 10:44:30.048795 7271 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 10:44:30.049058 master-0 kubenswrapper[7271]: I0313 10:44:30.049019 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" containerID="cri-o://e29c9e8859ea50a213d1056d538b6a3cc96cdadb35b68c7127f1a2cbb6be6418" gracePeriod=30 Mar 13 10:44:30.050184 master-0 kubenswrapper[7271]: I0313 10:44:30.049494 7271 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 10:44:30.050184 master-0 kubenswrapper[7271]: E0313 10:44:30.049998 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 10:44:30.050184 master-0 kubenswrapper[7271]: I0313 10:44:30.050012 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 10:44:30.050184 master-0 kubenswrapper[7271]: E0313 10:44:30.050025 7271 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 10:44:30.050184 master-0 kubenswrapper[7271]: I0313 10:44:30.050030 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 10:44:30.050184 master-0 kubenswrapper[7271]: I0313 10:44:30.050160 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 10:44:30.050434 master-0 kubenswrapper[7271]: I0313 10:44:30.050383 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 10:44:30.051219 master-0 kubenswrapper[7271]: I0313 10:44:30.051194 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:44:30.082302 master-0 kubenswrapper[7271]: I0313 10:44:30.082239 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 10:44:30.196568 master-0 kubenswrapper[7271]: I0313 10:44:30.196502 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:44:30.196855 master-0 kubenswrapper[7271]: I0313 10:44:30.196799 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:44:30.219025 master-0 
kubenswrapper[7271]: I0313 10:44:30.218978 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:44:30.247952 master-0 kubenswrapper[7271]: I0313 10:44:30.247854 7271 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="46c22bb5-2b6e-4123-8cf1-b715c28e4ca2" Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.297833 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.297912 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.298256 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.298387 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.298653 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.298728 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.298774 7271 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.298786 7271 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.298823 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:44:30.299359 master-0 kubenswrapper[7271]: I0313 10:44:30.298852 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:44:30.378244 master-0 kubenswrapper[7271]: I0313 10:44:30.378165 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:44:30.397968 master-0 kubenswrapper[7271]: W0313 10:44:30.397911 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1453f6461bf5d599ad65a4656343ee91.slice/crio-ae1044e0f3af37c165014c1eb642d1f8fde612b9df43f0ab3fcdadafb3b43db5 WatchSource:0}: Error finding container ae1044e0f3af37c165014c1eb642d1f8fde612b9df43f0ab3fcdadafb3b43db5: Status 404 returned error can't find the container with id ae1044e0f3af37c165014c1eb642d1f8fde612b9df43f0ab3fcdadafb3b43db5 Mar 13 10:44:30.593659 master-0 kubenswrapper[7271]: I0313 10:44:30.593618 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"] Mar 13 10:44:30.594010 master-0 kubenswrapper[7271]: I0313 10:44:30.593968 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb" podUID="6a5bf208-6131-44f5-b92e-6962af670a6c" containerName="controller-manager" containerID="cri-o://2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58" gracePeriod=30 Mar 13 10:44:30.615899 master-0 kubenswrapper[7271]: I0313 10:44:30.615786 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"] Mar 13 10:44:30.616152 master-0 kubenswrapper[7271]: I0313 10:44:30.616026 7271 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq" podUID="928b1766-6bac-4fba-a982-42b050581bd0" containerName="route-controller-manager" containerID="cri-o://9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7" gracePeriod=30 Mar 13 10:44:30.882870 master-0 kubenswrapper[7271]: I0313 10:44:30.882824 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:30.882870 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:30.882870 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:30.882870 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:30.883149 master-0 kubenswrapper[7271]: I0313 10:44:30.882883 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:31.016170 master-0 kubenswrapper[7271]: I0313 10:44:31.016119 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb" Mar 13 10:44:31.049962 master-0 kubenswrapper[7271]: I0313 10:44:31.044112 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq" Mar 13 10:44:31.117883 master-0 kubenswrapper[7271]: I0313 10:44:31.117820 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j7m9\" (UniqueName: \"kubernetes.io/projected/6a5bf208-6131-44f5-b92e-6962af670a6c-kube-api-access-4j7m9\") pod \"6a5bf208-6131-44f5-b92e-6962af670a6c\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " Mar 13 10:44:31.117883 master-0 kubenswrapper[7271]: I0313 10:44:31.117874 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-client-ca\") pod \"928b1766-6bac-4fba-a982-42b050581bd0\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " Mar 13 10:44:31.119653 master-0 kubenswrapper[7271]: I0313 10:44:31.117915 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-config\") pod \"6a5bf208-6131-44f5-b92e-6962af670a6c\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " Mar 13 10:44:31.119653 master-0 kubenswrapper[7271]: I0313 10:44:31.117955 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a5bf208-6131-44f5-b92e-6962af670a6c-serving-cert\") pod \"6a5bf208-6131-44f5-b92e-6962af670a6c\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " Mar 13 10:44:31.119653 master-0 kubenswrapper[7271]: I0313 10:44:31.118007 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928b1766-6bac-4fba-a982-42b050581bd0-serving-cert\") pod \"928b1766-6bac-4fba-a982-42b050581bd0\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " Mar 13 10:44:31.119653 master-0 kubenswrapper[7271]: I0313 10:44:31.118060 7271 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4mbz\" (UniqueName: \"kubernetes.io/projected/928b1766-6bac-4fba-a982-42b050581bd0-kube-api-access-r4mbz\") pod \"928b1766-6bac-4fba-a982-42b050581bd0\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " Mar 13 10:44:31.119653 master-0 kubenswrapper[7271]: I0313 10:44:31.118087 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-config\") pod \"928b1766-6bac-4fba-a982-42b050581bd0\" (UID: \"928b1766-6bac-4fba-a982-42b050581bd0\") " Mar 13 10:44:31.119653 master-0 kubenswrapper[7271]: I0313 10:44:31.118111 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-client-ca\") pod \"6a5bf208-6131-44f5-b92e-6962af670a6c\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " Mar 13 10:44:31.119653 master-0 kubenswrapper[7271]: I0313 10:44:31.118137 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-proxy-ca-bundles\") pod \"6a5bf208-6131-44f5-b92e-6962af670a6c\" (UID: \"6a5bf208-6131-44f5-b92e-6962af670a6c\") " Mar 13 10:44:31.119653 master-0 kubenswrapper[7271]: I0313 10:44:31.119408 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6a5bf208-6131-44f5-b92e-6962af670a6c" (UID: "6a5bf208-6131-44f5-b92e-6962af670a6c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:31.123545 master-0 kubenswrapper[7271]: I0313 10:44:31.121154 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5bf208-6131-44f5-b92e-6962af670a6c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a5bf208-6131-44f5-b92e-6962af670a6c" (UID: "6a5bf208-6131-44f5-b92e-6962af670a6c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:44:31.123545 master-0 kubenswrapper[7271]: I0313 10:44:31.121648 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a5bf208-6131-44f5-b92e-6962af670a6c" (UID: "6a5bf208-6131-44f5-b92e-6962af670a6c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:31.123545 master-0 kubenswrapper[7271]: I0313 10:44:31.121685 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-config" (OuterVolumeSpecName: "config") pod "6a5bf208-6131-44f5-b92e-6962af670a6c" (UID: "6a5bf208-6131-44f5-b92e-6962af670a6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:31.123545 master-0 kubenswrapper[7271]: I0313 10:44:31.121710 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-config" (OuterVolumeSpecName: "config") pod "928b1766-6bac-4fba-a982-42b050581bd0" (UID: "928b1766-6bac-4fba-a982-42b050581bd0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:31.123545 master-0 kubenswrapper[7271]: I0313 10:44:31.122394 7271 generic.go:334] "Generic (PLEG): container finished" podID="6a5bf208-6131-44f5-b92e-6962af670a6c" containerID="2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58" exitCode=0 Mar 13 10:44:31.123545 master-0 kubenswrapper[7271]: I0313 10:44:31.122466 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb" Mar 13 10:44:31.123545 master-0 kubenswrapper[7271]: I0313 10:44:31.122504 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb" event={"ID":"6a5bf208-6131-44f5-b92e-6962af670a6c","Type":"ContainerDied","Data":"2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58"} Mar 13 10:44:31.123545 master-0 kubenswrapper[7271]: I0313 10:44:31.122561 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb" event={"ID":"6a5bf208-6131-44f5-b92e-6962af670a6c","Type":"ContainerDied","Data":"75ded09f002aee5e524b9c490ee7d7119c38855f13e8560184884503b7c817ed"} Mar 13 10:44:31.123545 master-0 kubenswrapper[7271]: I0313 10:44:31.122597 7271 scope.go:117] "RemoveContainer" containerID="2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58" Mar 13 10:44:31.124401 master-0 kubenswrapper[7271]: I0313 10:44:31.123799 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b1766-6bac-4fba-a982-42b050581bd0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "928b1766-6bac-4fba-a982-42b050581bd0" (UID: "928b1766-6bac-4fba-a982-42b050581bd0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:44:31.125618 master-0 kubenswrapper[7271]: I0313 10:44:31.124414 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928b1766-6bac-4fba-a982-42b050581bd0-kube-api-access-r4mbz" (OuterVolumeSpecName: "kube-api-access-r4mbz") pod "928b1766-6bac-4fba-a982-42b050581bd0" (UID: "928b1766-6bac-4fba-a982-42b050581bd0"). InnerVolumeSpecName "kube-api-access-r4mbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:44:31.125618 master-0 kubenswrapper[7271]: I0313 10:44:31.124798 7271 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952" exitCode=0 Mar 13 10:44:31.125618 master-0 kubenswrapper[7271]: I0313 10:44:31.124865 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952"} Mar 13 10:44:31.125618 master-0 kubenswrapper[7271]: I0313 10:44:31.124887 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"ae1044e0f3af37c165014c1eb642d1f8fde612b9df43f0ab3fcdadafb3b43db5"} Mar 13 10:44:31.126682 master-0 kubenswrapper[7271]: I0313 10:44:31.126359 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-client-ca" (OuterVolumeSpecName: "client-ca") pod "928b1766-6bac-4fba-a982-42b050581bd0" (UID: "928b1766-6bac-4fba-a982-42b050581bd0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:31.131654 master-0 kubenswrapper[7271]: I0313 10:44:31.129013 7271 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="e29c9e8859ea50a213d1056d538b6a3cc96cdadb35b68c7127f1a2cbb6be6418" exitCode=0 Mar 13 10:44:31.131654 master-0 kubenswrapper[7271]: I0313 10:44:31.129125 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a800ac5d1779f65790cfc04fd054cd45e77032d228f479c2dc831649fa5ed50" Mar 13 10:44:31.131654 master-0 kubenswrapper[7271]: I0313 10:44:31.129224 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 10:44:31.131654 master-0 kubenswrapper[7271]: I0313 10:44:31.130069 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a5bf208-6131-44f5-b92e-6962af670a6c-kube-api-access-4j7m9" (OuterVolumeSpecName: "kube-api-access-4j7m9") pod "6a5bf208-6131-44f5-b92e-6962af670a6c" (UID: "6a5bf208-6131-44f5-b92e-6962af670a6c"). InnerVolumeSpecName "kube-api-access-4j7m9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:44:31.131654 master-0 kubenswrapper[7271]: I0313 10:44:31.131103 7271 generic.go:334] "Generic (PLEG): container finished" podID="928b1766-6bac-4fba-a982-42b050581bd0" containerID="9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7" exitCode=0 Mar 13 10:44:31.131654 master-0 kubenswrapper[7271]: I0313 10:44:31.131149 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq" Mar 13 10:44:31.131654 master-0 kubenswrapper[7271]: I0313 10:44:31.131207 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq" event={"ID":"928b1766-6bac-4fba-a982-42b050581bd0","Type":"ContainerDied","Data":"9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7"} Mar 13 10:44:31.131654 master-0 kubenswrapper[7271]: I0313 10:44:31.131238 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq" event={"ID":"928b1766-6bac-4fba-a982-42b050581bd0","Type":"ContainerDied","Data":"41b9f064863771d179e00d0168af1f05c078d7dbb7b63cd568d989269e34a9d4"} Mar 13 10:44:31.132767 master-0 kubenswrapper[7271]: I0313 10:44:31.132566 7271 generic.go:334] "Generic (PLEG): container finished" podID="05f7830b-51cc-45d2-bbb3-ac01eeed57ac" containerID="e30ddedef616e76982e7503ccdc6b701bfe5c6467184889999283ee9de5f7a92" exitCode=0 Mar 13 10:44:31.132767 master-0 kubenswrapper[7271]: I0313 10:44:31.132602 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"05f7830b-51cc-45d2-bbb3-ac01eeed57ac","Type":"ContainerDied","Data":"e30ddedef616e76982e7503ccdc6b701bfe5c6467184889999283ee9de5f7a92"} Mar 13 10:44:31.200084 master-0 kubenswrapper[7271]: I0313 10:44:31.200052 7271 scope.go:117] "RemoveContainer" containerID="2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58" Mar 13 10:44:31.200535 master-0 kubenswrapper[7271]: E0313 10:44:31.200500 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58\": container with ID starting with 2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58 not 
found: ID does not exist" containerID="2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58" Mar 13 10:44:31.200594 master-0 kubenswrapper[7271]: I0313 10:44:31.200536 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58"} err="failed to get container status \"2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58\": rpc error: code = NotFound desc = could not find container \"2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58\": container with ID starting with 2d864bc828901257862b939c2a039cb41544958dc7408d9e7962486322c60d58 not found: ID does not exist" Mar 13 10:44:31.200594 master-0 kubenswrapper[7271]: I0313 10:44:31.200572 7271 scope.go:117] "RemoveContainer" containerID="3bbb19054cdef32aad8515587717178e7bce7c315eb6bc762119d4e27dd7a9b0" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221159 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4mbz\" (UniqueName: \"kubernetes.io/projected/928b1766-6bac-4fba-a982-42b050581bd0-kube-api-access-r4mbz\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221194 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221206 7271 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221216 7271 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-proxy-ca-bundles\") on 
node \"master-0\" DevicePath \"\"" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221225 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j7m9\" (UniqueName: \"kubernetes.io/projected/6a5bf208-6131-44f5-b92e-6962af670a6c-kube-api-access-4j7m9\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221234 7271 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928b1766-6bac-4fba-a982-42b050581bd0-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221242 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a5bf208-6131-44f5-b92e-6962af670a6c-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221250 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a5bf208-6131-44f5-b92e-6962af670a6c-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221259 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928b1766-6bac-4fba-a982-42b050581bd0-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:31.222064 master-0 kubenswrapper[7271]: I0313 10:44:31.221387 7271 scope.go:117] "RemoveContainer" containerID="9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7" Mar 13 10:44:31.229463 master-0 kubenswrapper[7271]: I0313 10:44:31.229417 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"] Mar 13 10:44:31.238307 master-0 kubenswrapper[7271]: I0313 10:44:31.238239 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-59bc577c56-74qpq"] Mar 13 10:44:31.244863 master-0 kubenswrapper[7271]: I0313 10:44:31.244817 7271 scope.go:117] "RemoveContainer" containerID="9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7" Mar 13 10:44:31.246298 master-0 kubenswrapper[7271]: E0313 10:44:31.246253 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7\": container with ID starting with 9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7 not found: ID does not exist" containerID="9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7" Mar 13 10:44:31.246372 master-0 kubenswrapper[7271]: I0313 10:44:31.246292 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7"} err="failed to get container status \"9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7\": rpc error: code = NotFound desc = could not find container \"9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7\": container with ID starting with 9ca3419fe1bea0f004715b5f12f9d37638ad8ed87d0137b29fcb97cd44dcacd7 not found: ID does not exist" Mar 13 10:44:31.485538 master-0 kubenswrapper[7271]: I0313 10:44:31.484681 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"] Mar 13 10:44:31.488506 master-0 kubenswrapper[7271]: I0313 10:44:31.488470 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f9d5c7f9d-n8tcb"] Mar 13 10:44:31.492224 master-0 kubenswrapper[7271]: I0313 10:44:31.492187 7271 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 10:44:31.677515 master-0 
kubenswrapper[7271]: I0313 10:44:31.677436 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a5bf208-6131-44f5-b92e-6962af670a6c" path="/var/lib/kubelet/pods/6a5bf208-6131-44f5-b92e-6962af670a6c/volumes" Mar 13 10:44:31.678024 master-0 kubenswrapper[7271]: I0313 10:44:31.677993 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928b1766-6bac-4fba-a982-42b050581bd0" path="/var/lib/kubelet/pods/928b1766-6bac-4fba-a982-42b050581bd0/volumes" Mar 13 10:44:31.678367 master-0 kubenswrapper[7271]: I0313 10:44:31.678339 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes" Mar 13 10:44:31.678596 master-0 kubenswrapper[7271]: I0313 10:44:31.678554 7271 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 13 10:44:31.694499 master-0 kubenswrapper[7271]: I0313 10:44:31.694401 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 10:44:31.694499 master-0 kubenswrapper[7271]: I0313 10:44:31.694476 7271 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="46c22bb5-2b6e-4123-8cf1-b715c28e4ca2" Mar 13 10:44:31.697344 master-0 kubenswrapper[7271]: I0313 10:44:31.697282 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 10:44:31.697344 master-0 kubenswrapper[7271]: I0313 10:44:31.697325 7271 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="46c22bb5-2b6e-4123-8cf1-b715c28e4ca2" Mar 13 10:44:31.838143 master-0 kubenswrapper[7271]: I0313 10:44:31.838032 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-6745c97c48-vsk4v"] Mar 
13 10:44:31.838449 master-0 kubenswrapper[7271]: E0313 10:44:31.838416 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928b1766-6bac-4fba-a982-42b050581bd0" containerName="route-controller-manager" Mar 13 10:44:31.838449 master-0 kubenswrapper[7271]: I0313 10:44:31.838443 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="928b1766-6bac-4fba-a982-42b050581bd0" containerName="route-controller-manager" Mar 13 10:44:31.838556 master-0 kubenswrapper[7271]: E0313 10:44:31.838473 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5bf208-6131-44f5-b92e-6962af670a6c" containerName="controller-manager" Mar 13 10:44:31.838556 master-0 kubenswrapper[7271]: I0313 10:44:31.838484 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5bf208-6131-44f5-b92e-6962af670a6c" containerName="controller-manager" Mar 13 10:44:31.838694 master-0 kubenswrapper[7271]: I0313 10:44:31.838661 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a5bf208-6131-44f5-b92e-6962af670a6c" containerName="controller-manager" Mar 13 10:44:31.838694 master-0 kubenswrapper[7271]: I0313 10:44:31.838685 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="928b1766-6bac-4fba-a982-42b050581bd0" containerName="route-controller-manager" Mar 13 10:44:31.840038 master-0 kubenswrapper[7271]: I0313 10:44:31.840006 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:31.842451 master-0 kubenswrapper[7271]: I0313 10:44:31.842409 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6575454847-qh469"] Mar 13 10:44:31.843289 master-0 kubenswrapper[7271]: I0313 10:44:31.843258 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:31.844318 master-0 kubenswrapper[7271]: I0313 10:44:31.844272 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 13 10:44:31.844399 master-0 kubenswrapper[7271]: I0313 10:44:31.844354 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 13 10:44:31.844488 master-0 kubenswrapper[7271]: I0313 10:44:31.844464 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-vpkw6" Mar 13 10:44:31.844554 master-0 kubenswrapper[7271]: I0313 10:44:31.844502 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 13 10:44:31.844870 master-0 kubenswrapper[7271]: I0313 10:44:31.844707 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 10:44:31.845850 master-0 kubenswrapper[7271]: I0313 10:44:31.845614 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 13 10:44:31.847919 master-0 kubenswrapper[7271]: I0313 10:44:31.847641 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 10:44:31.847919 master-0 kubenswrapper[7271]: I0313 10:44:31.847699 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fn5mm" Mar 13 10:44:31.847919 master-0 kubenswrapper[7271]: I0313 10:44:31.847750 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 10:44:31.847919 master-0 kubenswrapper[7271]: I0313 10:44:31.847656 7271 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 10:44:31.847919 master-0 kubenswrapper[7271]: I0313 10:44:31.847848 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 10:44:31.847919 master-0 kubenswrapper[7271]: I0313 10:44:31.847890 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 10:44:31.852147 master-0 kubenswrapper[7271]: I0313 10:44:31.851433 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"] Mar 13 10:44:31.853319 master-0 kubenswrapper[7271]: I0313 10:44:31.852606 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:31.854569 master-0 kubenswrapper[7271]: I0313 10:44:31.854069 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 13 10:44:31.857839 master-0 kubenswrapper[7271]: I0313 10:44:31.857744 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-l5tkf" Mar 13 10:44:31.858465 master-0 kubenswrapper[7271]: I0313 10:44:31.858062 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:44:31.858633 master-0 kubenswrapper[7271]: I0313 10:44:31.858124 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:44:31.858717 master-0 kubenswrapper[7271]: I0313 10:44:31.858258 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:44:31.858837 master-0 kubenswrapper[7271]: I0313 
10:44:31.858604 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6575454847-qh469"] Mar 13 10:44:31.858874 master-0 kubenswrapper[7271]: I0313 10:44:31.858358 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:44:31.858993 master-0 kubenswrapper[7271]: I0313 10:44:31.858406 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:44:31.865999 master-0 kubenswrapper[7271]: I0313 10:44:31.865964 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6745c97c48-vsk4v"] Mar 13 10:44:31.866491 master-0 kubenswrapper[7271]: I0313 10:44:31.866463 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:44:31.884881 master-0 kubenswrapper[7271]: I0313 10:44:31.883409 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:31.884881 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:31.884881 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:31.884881 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:31.884881 master-0 kubenswrapper[7271]: I0313 10:44:31.883461 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:31.888716 master-0 kubenswrapper[7271]: I0313 10:44:31.887726 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"] Mar 13 10:44:32.032143 master-0 kubenswrapper[7271]: I0313 10:44:32.032083 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032174 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-client-ca\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032211 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/164610fc-3942-4e85-9f80-a335c9efcc2f-serving-cert\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032240 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032267 7271 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-federate-client-tls\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032353 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-metrics-client-ca\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032406 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-config\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032448 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032476 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rxh9\" (UniqueName: 
\"kubernetes.io/projected/79b16eac-51dc-486d-88f0-72fc29a91aa0-kube-api-access-9rxh9\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032511 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032565 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-client-ca\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032651 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-proxy-ca-bundles\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032680 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: 
\"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032736 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncfd8\" (UniqueName: \"kubernetes.io/projected/164610fc-3942-4e85-9f80-a335c9efcc2f-kube-api-access-ncfd8\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032775 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndn2f\" (UniqueName: \"kubernetes.io/projected/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-kube-api-access-ndn2f\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.032808 master-0 kubenswrapper[7271]: I0313 10:44:32.032809 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-config\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.033312 master-0 kubenswrapper[7271]: I0313 10:44:32.032841 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b16eac-51dc-486d-88f0-72fc29a91aa0-serving-cert\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.133729 master-0 kubenswrapper[7271]: I0313 10:44:32.133657 
7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncfd8\" (UniqueName: \"kubernetes.io/projected/164610fc-3942-4e85-9f80-a335c9efcc2f-kube-api-access-ncfd8\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.133729 master-0 kubenswrapper[7271]: I0313 10:44:32.133734 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndn2f\" (UniqueName: \"kubernetes.io/projected/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-kube-api-access-ndn2f\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.134056 master-0 kubenswrapper[7271]: I0313 10:44:32.133761 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-config\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.134056 master-0 kubenswrapper[7271]: I0313 10:44:32.133977 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b16eac-51dc-486d-88f0-72fc29a91aa0-serving-cert\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.134143 master-0 kubenswrapper[7271]: I0313 10:44:32.134048 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: 
\"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.134143 master-0 kubenswrapper[7271]: I0313 10:44:32.134093 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-client-ca\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.134700 master-0 kubenswrapper[7271]: I0313 10:44:32.134658 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/164610fc-3942-4e85-9f80-a335c9efcc2f-serving-cert\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.134755 master-0 kubenswrapper[7271]: I0313 10:44:32.134706 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.134755 master-0 kubenswrapper[7271]: I0313 10:44:32.134735 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-federate-client-tls\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.134823 master-0 kubenswrapper[7271]: I0313 10:44:32.134801 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-metrics-client-ca\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.134857 master-0 kubenswrapper[7271]: I0313 10:44:32.134833 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-config\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.134954 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.135007 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rxh9\" (UniqueName: \"kubernetes.io/projected/79b16eac-51dc-486d-88f0-72fc29a91aa0-kube-api-access-9rxh9\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.135050 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: 
\"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.135124 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-client-ca\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.135174 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-proxy-ca-bundles\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.135211 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.135622 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-config\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.135678 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-client-ca\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.135758 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-metrics-client-ca\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.136248 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-config\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.136409 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-client-ca\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:32.137042 master-0 kubenswrapper[7271]: I0313 10:44:32.136950 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:44:32.138497 master-0 
kubenswrapper[7271]: I0313 10:44:32.137112 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v"
Mar 13 10:44:32.138497 master-0 kubenswrapper[7271]: I0313 10:44:32.137151 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-proxy-ca-bundles\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"
Mar 13 10:44:32.139135 master-0 kubenswrapper[7271]: I0313 10:44:32.139047 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b16eac-51dc-486d-88f0-72fc29a91aa0-serving-cert\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469"
Mar 13 10:44:32.140039 master-0 kubenswrapper[7271]: I0313 10:44:32.139725 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/164610fc-3942-4e85-9f80-a335c9efcc2f-serving-cert\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"
Mar 13 10:44:32.140247 master-0 kubenswrapper[7271]: I0313 10:44:32.140193 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v"
Mar 13 10:44:32.144318 master-0 kubenswrapper[7271]: I0313 10:44:32.144284 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-federate-client-tls\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v"
Mar 13 10:44:32.152040 master-0 kubenswrapper[7271]: I0313 10:44:32.152006 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rxh9\" (UniqueName: \"kubernetes.io/projected/79b16eac-51dc-486d-88f0-72fc29a91aa0-kube-api-access-9rxh9\") pod \"route-controller-manager-6575454847-qh469\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469"
Mar 13 10:44:32.152203 master-0 kubenswrapper[7271]: I0313 10:44:32.152164 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v"
Mar 13 10:44:32.152427 master-0 kubenswrapper[7271]: I0313 10:44:32.152377 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v"
Mar 13 10:44:32.154506 master-0 kubenswrapper[7271]: I0313 10:44:32.154478 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842"}
Mar 13 10:44:32.154574 master-0 kubenswrapper[7271]: I0313 10:44:32.154516 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee"}
Mar 13 10:44:32.154574 master-0 kubenswrapper[7271]: I0313 10:44:32.154528 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675"}
Mar 13 10:44:32.155185 master-0 kubenswrapper[7271]: I0313 10:44:32.155027 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncfd8\" (UniqueName: \"kubernetes.io/projected/164610fc-3942-4e85-9f80-a335c9efcc2f-kube-api-access-ncfd8\") pod \"controller-manager-75b4cdcbf-pwt9b\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"
Mar 13 10:44:32.163748 master-0 kubenswrapper[7271]: I0313 10:44:32.163010 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndn2f\" (UniqueName: \"kubernetes.io/projected/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-kube-api-access-ndn2f\") pod \"telemeter-client-6745c97c48-vsk4v\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v"
Mar 13 10:44:32.173411 master-0 kubenswrapper[7271]: E0313 10:44:32.173332 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:44:32.174323 master-0 kubenswrapper[7271]: I0313 10:44:32.173745 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.173732143 podStartE2EDuration="2.173732143s" podCreationTimestamp="2026-03-13 10:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:32.172415957 +0000 UTC m=+526.699238347" watchObservedRunningTime="2026-03-13 10:44:32.173732143 +0000 UTC m=+526.700554533"
Mar 13 10:44:32.175170 master-0 kubenswrapper[7271]: E0313 10:44:32.175103 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:44:32.177064 master-0 kubenswrapper[7271]: E0313 10:44:32.176888 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:44:32.177162 master-0 kubenswrapper[7271]: E0313 10:44:32.177073 7271 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" podUID="d1100866-59a5-4653-b8eb-7945515ae057" containerName="kube-multus-additional-cni-plugins"
Mar 13 10:44:32.187735 master-0 kubenswrapper[7271]: I0313 10:44:32.187678 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v"
Mar 13 10:44:32.200328 master-0 kubenswrapper[7271]: I0313 10:44:32.200262 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469"
Mar 13 10:44:32.218436 master-0 kubenswrapper[7271]: I0313 10:44:32.217987 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"
Mar 13 10:44:32.371854 master-0 kubenswrapper[7271]: I0313 10:44:32.371796 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-rshw5"]
Mar 13 10:44:32.373055 master-0 kubenswrapper[7271]: I0313 10:44:32.373007 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-rshw5"
Mar 13 10:44:32.383071 master-0 kubenswrapper[7271]: I0313 10:44:32.381403 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-rshw5"]
Mar 13 10:44:32.392831 master-0 kubenswrapper[7271]: I0313 10:44:32.390748 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-rxbss"
Mar 13 10:44:32.493728 master-0 kubenswrapper[7271]: I0313 10:44:32.493090 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 10:44:32.546895 master-0 kubenswrapper[7271]: I0313 10:44:32.546848 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh6kl\" (UniqueName: \"kubernetes.io/projected/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-kube-api-access-gh6kl\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5"
Mar 13 10:44:32.547066 master-0 kubenswrapper[7271]: I0313 10:44:32.546963 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5"
Mar 13 10:44:32.648497 master-0 kubenswrapper[7271]: I0313 10:44:32.648375 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kube-api-access\") pod \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") "
Mar 13 10:44:32.648497 master-0 kubenswrapper[7271]: I0313 10:44:32.648473 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kubelet-dir\") pod \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") "
Mar 13 10:44:32.648730 master-0 kubenswrapper[7271]: I0313 10:44:32.648613 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-var-lock\") pod \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\" (UID: \"05f7830b-51cc-45d2-bbb3-ac01eeed57ac\") "
Mar 13 10:44:32.649016 master-0 kubenswrapper[7271]: I0313 10:44:32.648922 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5"
Mar 13 10:44:32.649080 master-0 kubenswrapper[7271]: I0313 10:44:32.649052 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh6kl\" (UniqueName: \"kubernetes.io/projected/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-kube-api-access-gh6kl\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5"
Mar 13 10:44:32.649185 master-0 kubenswrapper[7271]: I0313 10:44:32.649153 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05f7830b-51cc-45d2-bbb3-ac01eeed57ac" (UID: "05f7830b-51cc-45d2-bbb3-ac01eeed57ac"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:44:32.649219 master-0 kubenswrapper[7271]: I0313 10:44:32.649173 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-var-lock" (OuterVolumeSpecName: "var-lock") pod "05f7830b-51cc-45d2-bbb3-ac01eeed57ac" (UID: "05f7830b-51cc-45d2-bbb3-ac01eeed57ac"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:44:32.652490 master-0 kubenswrapper[7271]: I0313 10:44:32.652437 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05f7830b-51cc-45d2-bbb3-ac01eeed57ac" (UID: "05f7830b-51cc-45d2-bbb3-ac01eeed57ac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:44:32.659057 master-0 kubenswrapper[7271]: I0313 10:44:32.659000 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5"
Mar 13 10:44:32.669922 master-0 kubenswrapper[7271]: I0313 10:44:32.669864 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh6kl\" (UniqueName: \"kubernetes.io/projected/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-kube-api-access-gh6kl\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5"
Mar 13 10:44:32.677948 master-0 kubenswrapper[7271]: I0313 10:44:32.677905 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6745c97c48-vsk4v"]
Mar 13 10:44:32.682534 master-0 kubenswrapper[7271]: W0313 10:44:32.682490 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc4f01ba_a729_4cc8_a2d6_b4efe197efe3.slice/crio-fc06a3a56daeb8681fdfd097c3d110b5a504914fb61e4dd4e8750b841edf5a9b WatchSource:0}: Error finding container fc06a3a56daeb8681fdfd097c3d110b5a504914fb61e4dd4e8750b841edf5a9b: Status 404 returned error can't find the container with id fc06a3a56daeb8681fdfd097c3d110b5a504914fb61e4dd4e8750b841edf5a9b
Mar 13 10:44:32.686507 master-0 kubenswrapper[7271]: I0313 10:44:32.686458 7271 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 10:44:32.705963 master-0 kubenswrapper[7271]: I0313 10:44:32.705795 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-rshw5"
Mar 13 10:44:32.751759 master-0 kubenswrapper[7271]: I0313 10:44:32.751718 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 10:44:32.751999 master-0 kubenswrapper[7271]: I0313 10:44:32.751983 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 10:44:32.752185 master-0 kubenswrapper[7271]: I0313 10:44:32.752094 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05f7830b-51cc-45d2-bbb3-ac01eeed57ac-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 10:44:32.761775 master-0 kubenswrapper[7271]: I0313 10:44:32.761732 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6575454847-qh469"]
Mar 13 10:44:32.830268 master-0 kubenswrapper[7271]: I0313 10:44:32.829891 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"]
Mar 13 10:44:32.849723 master-0 kubenswrapper[7271]: W0313 10:44:32.848723 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6 WatchSource:0}: Error finding container a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6: Status 404 returned error can't find the container with id a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6
Mar 13 10:44:32.882770 master-0 kubenswrapper[7271]: I0313 10:44:32.882571 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:32.882770 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:32.882770 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:32.882770 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:32.882770 master-0 kubenswrapper[7271]: I0313 10:44:32.882622 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:33.162872 master-0 kubenswrapper[7271]: I0313 10:44:33.162732 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" event={"ID":"79b16eac-51dc-486d-88f0-72fc29a91aa0","Type":"ContainerStarted","Data":"ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463"}
Mar 13 10:44:33.162872 master-0 kubenswrapper[7271]: I0313 10:44:33.162778 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" event={"ID":"79b16eac-51dc-486d-88f0-72fc29a91aa0","Type":"ContainerStarted","Data":"a969cd9916bc227ecbb885bc70a2df00d168dd97f35b5236c2a8d707004539e1"}
Mar 13 10:44:33.163475 master-0 kubenswrapper[7271]: I0313 10:44:33.162926 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469"
Mar 13 10:44:33.163829 master-0 kubenswrapper[7271]: I0313 10:44:33.163796 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" event={"ID":"164610fc-3942-4e85-9f80-a335c9efcc2f","Type":"ContainerStarted","Data":"f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8"}
Mar 13 10:44:33.163829 master-0 kubenswrapper[7271]: I0313 10:44:33.163829 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" event={"ID":"164610fc-3942-4e85-9f80-a335c9efcc2f","Type":"ContainerStarted","Data":"a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6"}
Mar 13 10:44:33.164814 master-0 kubenswrapper[7271]: I0313 10:44:33.164765 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"
Mar 13 10:44:33.166965 master-0 kubenswrapper[7271]: I0313 10:44:33.166916 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-rshw5"]
Mar 13 10:44:33.170796 master-0 kubenswrapper[7271]: I0313 10:44:33.170748 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"
Mar 13 10:44:33.175222 master-0 kubenswrapper[7271]: I0313 10:44:33.174439 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 10:44:33.178609 master-0 kubenswrapper[7271]: I0313 10:44:33.176420 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"05f7830b-51cc-45d2-bbb3-ac01eeed57ac","Type":"ContainerDied","Data":"8806b35c314af20732bacfc49d8a1556a0a610503737d33085a256d12444c681"}
Mar 13 10:44:33.178609 master-0 kubenswrapper[7271]: I0313 10:44:33.176455 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8806b35c314af20732bacfc49d8a1556a0a610503737d33085a256d12444c681"
Mar 13 10:44:33.178609 master-0 kubenswrapper[7271]: I0313 10:44:33.176468 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" event={"ID":"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3","Type":"ContainerStarted","Data":"fc06a3a56daeb8681fdfd097c3d110b5a504914fb61e4dd4e8750b841edf5a9b"}
Mar 13 10:44:33.178609 master-0 kubenswrapper[7271]: I0313 10:44:33.176483 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:44:33.199610 master-0 kubenswrapper[7271]: I0313 10:44:33.199148 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" podStartSLOduration=3.199127931 podStartE2EDuration="3.199127931s" podCreationTimestamp="2026-03-13 10:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:33.191861933 +0000 UTC m=+527.718684313" watchObservedRunningTime="2026-03-13 10:44:33.199127931 +0000 UTC m=+527.725950321"
Mar 13 10:44:33.232608 master-0 kubenswrapper[7271]: I0313 10:44:33.231397 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" podStartSLOduration=3.231382409 podStartE2EDuration="3.231382409s" podCreationTimestamp="2026-03-13 10:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:33.229168059 +0000 UTC m=+527.755990449" watchObservedRunningTime="2026-03-13 10:44:33.231382409 +0000 UTC m=+527.758204799"
Mar 13 10:44:33.303450 master-0 kubenswrapper[7271]: I0313 10:44:33.302716 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469"
Mar 13 10:44:33.883484 master-0 kubenswrapper[7271]: I0313 10:44:33.883403 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:33.883484 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:33.883484 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:33.883484 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:33.883771 master-0 kubenswrapper[7271]: I0313 10:44:33.883540 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:34.184815 master-0 kubenswrapper[7271]: I0313 10:44:34.184763 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-rshw5" event={"ID":"14f6e3b2-716c-4392-b3c8-75b2168ccfb7","Type":"ContainerStarted","Data":"e81dffff6f13ab49bd692f744da8f1e5846de6dfced2e8388c2b2a93c4b96d8f"}
Mar 13 10:44:34.185364 master-0 kubenswrapper[7271]: I0313 10:44:34.184823 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-rshw5" event={"ID":"14f6e3b2-716c-4392-b3c8-75b2168ccfb7","Type":"ContainerStarted","Data":"12234566fa74a53e9cead3608885bd75acc12e047b5023d615978a82f61c27ab"}
Mar 13 10:44:34.185364 master-0 kubenswrapper[7271]: I0313 10:44:34.184843 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-rshw5" event={"ID":"14f6e3b2-716c-4392-b3c8-75b2168ccfb7","Type":"ContainerStarted","Data":"d019d509921c4d166cd7651a1a35a29172d4e8a0b6f47b7d8c8b1a18d02dbf3c"}
Mar 13 10:44:34.199159 master-0 kubenswrapper[7271]: I0313 10:44:34.199036 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7769569c45-rshw5" podStartSLOduration=2.198991065 podStartE2EDuration="2.198991065s" podCreationTimestamp="2026-03-13 10:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:34.19734003 +0000 UTC m=+528.724162420" watchObservedRunningTime="2026-03-13 10:44:34.198991065 +0000 UTC m=+528.725813455"
Mar 13 10:44:34.238074 master-0 kubenswrapper[7271]: I0313 10:44:34.236305 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-d787l"]
Mar 13 10:44:34.238074 master-0 kubenswrapper[7271]: I0313 10:44:34.236720 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerName="multus-admission-controller" containerID="cri-o://6ac08771019787a7c11813b1fc15b8b6c6e6e35ed0a49a438a259a987603471f" gracePeriod=30
Mar 13 10:44:34.238074 master-0 kubenswrapper[7271]: I0313 10:44:34.237054 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerName="kube-rbac-proxy" containerID="cri-o://b8b86d02f4f86b49f256fe88515a474a9fb718a6bd218f138f4504fc8b7c89fc" gracePeriod=30
Mar 13 10:44:34.883183 master-0 kubenswrapper[7271]: I0313 10:44:34.883098 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:34.883183 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:34.883183 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:34.883183 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:34.883183 master-0 kubenswrapper[7271]: I0313 10:44:34.883169 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:35.193134 master-0 kubenswrapper[7271]: I0313 10:44:35.193034 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" event={"ID":"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3","Type":"ContainerStarted","Data":"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f"}
Mar 13 10:44:35.195087 master-0 kubenswrapper[7271]: I0313 10:44:35.195027 7271 generic.go:334] "Generic (PLEG): container finished" podID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerID="b8b86d02f4f86b49f256fe88515a474a9fb718a6bd218f138f4504fc8b7c89fc" exitCode=0
Mar 13 10:44:35.195213 master-0 kubenswrapper[7271]: I0313 10:44:35.195120 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" event={"ID":"95339220-324d-45e7-bdc2-e4f42fbd1d32","Type":"ContainerDied","Data":"b8b86d02f4f86b49f256fe88515a474a9fb718a6bd218f138f4504fc8b7c89fc"}
Mar 13 10:44:35.883040 master-0 kubenswrapper[7271]: I0313 10:44:35.882993 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:35.883040 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:35.883040 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:35.883040 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:35.883401 master-0 kubenswrapper[7271]: I0313 10:44:35.883369 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:36.882429 master-0 kubenswrapper[7271]: I0313 10:44:36.882381 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:36.882429 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:36.882429 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:36.882429 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:36.883070 master-0 kubenswrapper[7271]: I0313 10:44:36.882438 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:37.209900 master-0 kubenswrapper[7271]: I0313 10:44:37.209832 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" event={"ID":"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3","Type":"ContainerStarted","Data":"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4"}
Mar 13 10:44:37.209900 master-0 kubenswrapper[7271]: I0313 10:44:37.209889 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" event={"ID":"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3","Type":"ContainerStarted","Data":"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769"}
Mar 13 10:44:37.234556 master-0 kubenswrapper[7271]: I0313 10:44:37.234446 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" podStartSLOduration=3.47070061 podStartE2EDuration="7.234422937s" podCreationTimestamp="2026-03-13 10:44:30 +0000 UTC" firstStartedPulling="2026-03-13 10:44:32.686351525 +0000 UTC m=+527.213173915" lastFinishedPulling="2026-03-13 10:44:36.450073852 +0000 UTC m=+530.976896242" observedRunningTime="2026-03-13 10:44:37.226381908 +0000 UTC m=+531.753204308" watchObservedRunningTime="2026-03-13 10:44:37.234422937 +0000 UTC m=+531.761245327"
Mar 13 10:44:37.883675 master-0 kubenswrapper[7271]: I0313 10:44:37.883577 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:37.883675 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:37.883675 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:37.883675 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:37.884528 master-0 kubenswrapper[7271]: I0313 10:44:37.883751 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:38.883824 master-0 kubenswrapper[7271]: I0313 10:44:38.883743 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:38.883824 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:38.883824 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:38.883824 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:38.883824 master-0 kubenswrapper[7271]: I0313 10:44:38.883817 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:39.883792 master-0 kubenswrapper[7271]: I0313 10:44:39.883700 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:39.883792 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:39.883792 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:39.883792 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:39.884848 master-0 kubenswrapper[7271]: I0313 10:44:39.883816 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:40.882929 master-0 kubenswrapper[7271]: I0313 10:44:40.882864 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:40.882929 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:40.882929 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:40.882929 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:40.883227 master-0 kubenswrapper[7271]: I0313 10:44:40.882936 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:41.883201 master-0 kubenswrapper[7271]: I0313 10:44:41.883136 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:44:41.883201 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:44:41.883201 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:44:41.883201 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:44:41.883925 master-0 kubenswrapper[7271]: I0313 10:44:41.883213 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:44:42.172204 master-0 kubenswrapper[7271]: E0313 10:44:42.172001 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:44:42.174684 master-0 kubenswrapper[7271]: E0313 10:44:42.174558 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:44:42.183196 master-0 kubenswrapper[7271]: E0313 10:44:42.183045 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:44:42.183196 master-0 kubenswrapper[7271]: E0313 10:44:42.183177 7271 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" podUID="d1100866-59a5-4653-b8eb-7945515ae057" containerName="kube-multus-additional-cni-plugins"
Mar 13 10:44:42.249839 master-0 kubenswrapper[7271]: I0313 10:44:42.249768 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 13 10:44:42.250194 master-0 kubenswrapper[7271]: E0313 10:44:42.250102 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f7830b-51cc-45d2-bbb3-ac01eeed57ac" containerName="installer"
Mar 13 10:44:42.250194 master-0 kubenswrapper[7271]: I0313 10:44:42.250117 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f7830b-51cc-45d2-bbb3-ac01eeed57ac" containerName="installer"
Mar 13 10:44:42.250292 master-0 kubenswrapper[7271]: I0313 10:44:42.250239 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f7830b-51cc-45d2-bbb3-ac01eeed57ac" containerName="installer"
Mar 13 10:44:42.250754 master-0 kubenswrapper[7271]: I0313 10:44:42.250726 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 10:44:42.255784 master-0 kubenswrapper[7271]: I0313 10:44:42.255123 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-525r2"
Mar 13 10:44:42.255784 master-0 kubenswrapper[7271]: I0313 10:44:42.255438 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 13 10:44:42.266240 master-0 kubenswrapper[7271]: I0313 10:44:42.265939 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 13 10:44:42.411221 master-0 kubenswrapper[7271]: I0313 10:44:42.411161 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bcb671-5236-49fb-8540-131f18b91fc3-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 10:44:42.411816 master-0 kubenswrapper[7271]: I0313 10:44:42.411797 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 10:44:42.412047 master-0 kubenswrapper[7271]: I0313 10:44:42.411986 7271 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:44:42.514696 master-0 kubenswrapper[7271]: I0313 10:44:42.514470 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:44:42.514696 master-0 kubenswrapper[7271]: I0313 10:44:42.514651 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:44:42.515215 master-0 kubenswrapper[7271]: I0313 10:44:42.514686 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:44:42.515215 master-0 kubenswrapper[7271]: I0313 10:44:42.514874 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:44:42.515215 master-0 kubenswrapper[7271]: I0313 
10:44:42.514902 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bcb671-5236-49fb-8540-131f18b91fc3-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:44:42.532666 master-0 kubenswrapper[7271]: I0313 10:44:42.531783 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bcb671-5236-49fb-8540-131f18b91fc3-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:44:42.591689 master-0 kubenswrapper[7271]: I0313 10:44:42.591621 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:44:42.883375 master-0 kubenswrapper[7271]: I0313 10:44:42.883297 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:42.883375 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:42.883375 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:42.883375 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:42.884347 master-0 kubenswrapper[7271]: I0313 10:44:42.883532 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:43.011369 master-0 kubenswrapper[7271]: I0313 10:44:43.011304 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 13 10:44:43.012411 master-0 kubenswrapper[7271]: W0313 10:44:43.012330 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb3bcb671_5236_49fb_8540_131f18b91fc3.slice/crio-d10929038456048d0742620d09ad12198fa061332340d13fe780561ae6f8528b WatchSource:0}: Error finding container d10929038456048d0742620d09ad12198fa061332340d13fe780561ae6f8528b: Status 404 returned error can't find the container with id d10929038456048d0742620d09ad12198fa061332340d13fe780561ae6f8528b Mar 13 10:44:43.259826 master-0 kubenswrapper[7271]: I0313 10:44:43.259764 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"b3bcb671-5236-49fb-8540-131f18b91fc3","Type":"ContainerStarted","Data":"d10929038456048d0742620d09ad12198fa061332340d13fe780561ae6f8528b"} Mar 13 10:44:43.882798 master-0 kubenswrapper[7271]: I0313 10:44:43.882749 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:43.882798 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:43.882798 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:43.882798 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:43.883150 master-0 kubenswrapper[7271]: I0313 10:44:43.882820 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:44.268170 master-0 kubenswrapper[7271]: I0313 10:44:44.268017 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"b3bcb671-5236-49fb-8540-131f18b91fc3","Type":"ContainerStarted","Data":"ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66"} Mar 13 10:44:44.285411 master-0 kubenswrapper[7271]: I0313 10:44:44.285328 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.285308824 podStartE2EDuration="2.285308824s" podCreationTimestamp="2026-03-13 10:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:44.283290939 +0000 UTC m=+538.810113359" watchObservedRunningTime="2026-03-13 10:44:44.285308824 +0000 UTC m=+538.812131214" Mar 13 10:44:44.882759 master-0 kubenswrapper[7271]: I0313 10:44:44.882665 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:44.882759 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:44.882759 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:44.882759 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:44.883113 master-0 kubenswrapper[7271]: I0313 10:44:44.882757 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:45.883291 master-0 kubenswrapper[7271]: I0313 10:44:45.883204 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 13 10:44:45.883291 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:45.883291 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:45.883291 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:45.884126 master-0 kubenswrapper[7271]: I0313 10:44:45.883288 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:46.067457 master-0 kubenswrapper[7271]: I0313 10:44:46.067287 7271 scope.go:117] "RemoveContainer" containerID="e29c9e8859ea50a213d1056d538b6a3cc96cdadb35b68c7127f1a2cbb6be6418" Mar 13 10:44:46.883851 master-0 kubenswrapper[7271]: I0313 10:44:46.883794 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:46.883851 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:46.883851 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:46.883851 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:46.883851 master-0 kubenswrapper[7271]: I0313 10:44:46.883862 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:47.883079 master-0 kubenswrapper[7271]: I0313 10:44:47.883026 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 
13 10:44:47.883079 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:47.883079 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:47.883079 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:47.883357 master-0 kubenswrapper[7271]: I0313 10:44:47.883098 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:48.830109 master-0 kubenswrapper[7271]: I0313 10:44:48.830036 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 13 10:44:48.830762 master-0 kubenswrapper[7271]: I0313 10:44:48.830282 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="b3bcb671-5236-49fb-8540-131f18b91fc3" containerName="installer" containerID="cri-o://ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66" gracePeriod=30 Mar 13 10:44:48.883667 master-0 kubenswrapper[7271]: I0313 10:44:48.883617 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:48.883667 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:48.883667 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:48.883667 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:48.884015 master-0 kubenswrapper[7271]: I0313 10:44:48.883683 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 13 10:44:49.883511 master-0 kubenswrapper[7271]: I0313 10:44:49.883399 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:49.883511 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:49.883511 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:49.883511 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:49.884760 master-0 kubenswrapper[7271]: I0313 10:44:49.883538 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:50.588249 master-0 kubenswrapper[7271]: I0313 10:44:50.588184 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"] Mar 13 10:44:50.588680 master-0 kubenswrapper[7271]: I0313 10:44:50.588534 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" podUID="164610fc-3942-4e85-9f80-a335c9efcc2f" containerName="controller-manager" containerID="cri-o://f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8" gracePeriod=30 Mar 13 10:44:50.595708 master-0 kubenswrapper[7271]: I0313 10:44:50.595647 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6575454847-qh469"] Mar 13 10:44:50.595953 master-0 kubenswrapper[7271]: I0313 10:44:50.595913 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" 
podUID="79b16eac-51dc-486d-88f0-72fc29a91aa0" containerName="route-controller-manager" containerID="cri-o://ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463" gracePeriod=30 Mar 13 10:44:50.882977 master-0 kubenswrapper[7271]: I0313 10:44:50.882711 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:50.882977 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:50.882977 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:50.882977 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:50.882977 master-0 kubenswrapper[7271]: I0313 10:44:50.882780 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:50.942914 master-0 kubenswrapper[7271]: I0313 10:44:50.942867 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:51.060658 master-0 kubenswrapper[7271]: I0313 10:44:51.060086 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-client-ca\") pod \"164610fc-3942-4e85-9f80-a335c9efcc2f\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " Mar 13 10:44:51.060962 master-0 kubenswrapper[7271]: I0313 10:44:51.060930 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-config\") pod \"164610fc-3942-4e85-9f80-a335c9efcc2f\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " Mar 13 10:44:51.061005 master-0 kubenswrapper[7271]: I0313 10:44:51.060975 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/164610fc-3942-4e85-9f80-a335c9efcc2f-serving-cert\") pod \"164610fc-3942-4e85-9f80-a335c9efcc2f\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " Mar 13 10:44:51.061573 master-0 kubenswrapper[7271]: I0313 10:44:51.061031 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-proxy-ca-bundles\") pod \"164610fc-3942-4e85-9f80-a335c9efcc2f\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " Mar 13 10:44:51.061653 master-0 kubenswrapper[7271]: I0313 10:44:51.061630 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncfd8\" (UniqueName: \"kubernetes.io/projected/164610fc-3942-4e85-9f80-a335c9efcc2f-kube-api-access-ncfd8\") pod \"164610fc-3942-4e85-9f80-a335c9efcc2f\" (UID: \"164610fc-3942-4e85-9f80-a335c9efcc2f\") " Mar 13 10:44:51.062801 master-0 kubenswrapper[7271]: I0313 10:44:51.062562 7271 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "164610fc-3942-4e85-9f80-a335c9efcc2f" (UID: "164610fc-3942-4e85-9f80-a335c9efcc2f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:51.062801 master-0 kubenswrapper[7271]: I0313 10:44:51.062715 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-client-ca" (OuterVolumeSpecName: "client-ca") pod "164610fc-3942-4e85-9f80-a335c9efcc2f" (UID: "164610fc-3942-4e85-9f80-a335c9efcc2f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:51.063112 master-0 kubenswrapper[7271]: I0313 10:44:51.062865 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-config" (OuterVolumeSpecName: "config") pod "164610fc-3942-4e85-9f80-a335c9efcc2f" (UID: "164610fc-3942-4e85-9f80-a335c9efcc2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:51.065343 master-0 kubenswrapper[7271]: I0313 10:44:51.065284 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/164610fc-3942-4e85-9f80-a335c9efcc2f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "164610fc-3942-4e85-9f80-a335c9efcc2f" (UID: "164610fc-3942-4e85-9f80-a335c9efcc2f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:44:51.065493 master-0 kubenswrapper[7271]: I0313 10:44:51.065451 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164610fc-3942-4e85-9f80-a335c9efcc2f-kube-api-access-ncfd8" (OuterVolumeSpecName: "kube-api-access-ncfd8") pod "164610fc-3942-4e85-9f80-a335c9efcc2f" (UID: "164610fc-3942-4e85-9f80-a335c9efcc2f"). InnerVolumeSpecName "kube-api-access-ncfd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:44:51.083641 master-0 kubenswrapper[7271]: I0313 10:44:51.082721 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163049 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-client-ca\") pod \"79b16eac-51dc-486d-88f0-72fc29a91aa0\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163119 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rxh9\" (UniqueName: \"kubernetes.io/projected/79b16eac-51dc-486d-88f0-72fc29a91aa0-kube-api-access-9rxh9\") pod \"79b16eac-51dc-486d-88f0-72fc29a91aa0\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163138 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b16eac-51dc-486d-88f0-72fc29a91aa0-serving-cert\") pod \"79b16eac-51dc-486d-88f0-72fc29a91aa0\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163248 7271 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-config\") pod \"79b16eac-51dc-486d-88f0-72fc29a91aa0\" (UID: \"79b16eac-51dc-486d-88f0-72fc29a91aa0\") " Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163456 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163475 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/164610fc-3942-4e85-9f80-a335c9efcc2f-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163484 7271 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163496 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncfd8\" (UniqueName: \"kubernetes.io/projected/164610fc-3942-4e85-9f80-a335c9efcc2f-kube-api-access-ncfd8\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163505 7271 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/164610fc-3942-4e85-9f80-a335c9efcc2f-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163528 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-client-ca" (OuterVolumeSpecName: "client-ca") pod "79b16eac-51dc-486d-88f0-72fc29a91aa0" (UID: 
"79b16eac-51dc-486d-88f0-72fc29a91aa0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:51.165369 master-0 kubenswrapper[7271]: I0313 10:44:51.163961 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-config" (OuterVolumeSpecName: "config") pod "79b16eac-51dc-486d-88f0-72fc29a91aa0" (UID: "79b16eac-51dc-486d-88f0-72fc29a91aa0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:51.167226 master-0 kubenswrapper[7271]: I0313 10:44:51.166974 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b16eac-51dc-486d-88f0-72fc29a91aa0-kube-api-access-9rxh9" (OuterVolumeSpecName: "kube-api-access-9rxh9") pod "79b16eac-51dc-486d-88f0-72fc29a91aa0" (UID: "79b16eac-51dc-486d-88f0-72fc29a91aa0"). InnerVolumeSpecName "kube-api-access-9rxh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:44:51.167723 master-0 kubenswrapper[7271]: I0313 10:44:51.167688 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79b16eac-51dc-486d-88f0-72fc29a91aa0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "79b16eac-51dc-486d-88f0-72fc29a91aa0" (UID: "79b16eac-51dc-486d-88f0-72fc29a91aa0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:44:51.263952 master-0 kubenswrapper[7271]: I0313 10:44:51.263872 7271 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:51.263952 master-0 kubenswrapper[7271]: I0313 10:44:51.263916 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rxh9\" (UniqueName: \"kubernetes.io/projected/79b16eac-51dc-486d-88f0-72fc29a91aa0-kube-api-access-9rxh9\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:51.263952 master-0 kubenswrapper[7271]: I0313 10:44:51.263927 7271 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79b16eac-51dc-486d-88f0-72fc29a91aa0-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:51.263952 master-0 kubenswrapper[7271]: I0313 10:44:51.263936 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b16eac-51dc-486d-88f0-72fc29a91aa0-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:51.314313 master-0 kubenswrapper[7271]: I0313 10:44:51.314233 7271 generic.go:334] "Generic (PLEG): container finished" podID="164610fc-3942-4e85-9f80-a335c9efcc2f" containerID="f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8" exitCode=0 Mar 13 10:44:51.314313 master-0 kubenswrapper[7271]: I0313 10:44:51.314312 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" event={"ID":"164610fc-3942-4e85-9f80-a335c9efcc2f","Type":"ContainerDied","Data":"f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8"} Mar 13 10:44:51.314695 master-0 kubenswrapper[7271]: I0313 10:44:51.314347 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" 
event={"ID":"164610fc-3942-4e85-9f80-a335c9efcc2f","Type":"ContainerDied","Data":"a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6"} Mar 13 10:44:51.314695 master-0 kubenswrapper[7271]: I0313 10:44:51.314368 7271 scope.go:117] "RemoveContainer" containerID="f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8" Mar 13 10:44:51.314695 master-0 kubenswrapper[7271]: I0313 10:44:51.314472 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b" Mar 13 10:44:51.318660 master-0 kubenswrapper[7271]: I0313 10:44:51.318614 7271 generic.go:334] "Generic (PLEG): container finished" podID="79b16eac-51dc-486d-88f0-72fc29a91aa0" containerID="ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463" exitCode=0 Mar 13 10:44:51.318778 master-0 kubenswrapper[7271]: I0313 10:44:51.318718 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" event={"ID":"79b16eac-51dc-486d-88f0-72fc29a91aa0","Type":"ContainerDied","Data":"ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463"} Mar 13 10:44:51.318823 master-0 kubenswrapper[7271]: I0313 10:44:51.318788 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" event={"ID":"79b16eac-51dc-486d-88f0-72fc29a91aa0","Type":"ContainerDied","Data":"a969cd9916bc227ecbb885bc70a2df00d168dd97f35b5236c2a8d707004539e1"} Mar 13 10:44:51.318823 master-0 kubenswrapper[7271]: I0313 10:44:51.318676 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6575454847-qh469" Mar 13 10:44:51.349878 master-0 kubenswrapper[7271]: I0313 10:44:51.349832 7271 scope.go:117] "RemoveContainer" containerID="f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8" Mar 13 10:44:51.350372 master-0 kubenswrapper[7271]: E0313 10:44:51.350336 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8\": container with ID starting with f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8 not found: ID does not exist" containerID="f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8" Mar 13 10:44:51.350457 master-0 kubenswrapper[7271]: I0313 10:44:51.350380 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8"} err="failed to get container status \"f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8\": rpc error: code = NotFound desc = could not find container \"f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8\": container with ID starting with f14be7a6cc1153177570eba843301e4ca33df79e9b92bb294b1946dcfdfc5aa8 not found: ID does not exist" Mar 13 10:44:51.350457 master-0 kubenswrapper[7271]: I0313 10:44:51.350413 7271 scope.go:117] "RemoveContainer" containerID="ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463" Mar 13 10:44:51.370575 master-0 kubenswrapper[7271]: I0313 10:44:51.370531 7271 scope.go:117] "RemoveContainer" containerID="ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463" Mar 13 10:44:51.371292 master-0 kubenswrapper[7271]: E0313 10:44:51.371260 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463\": container with ID starting with ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463 not found: ID does not exist" containerID="ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463" Mar 13 10:44:51.371362 master-0 kubenswrapper[7271]: I0313 10:44:51.371304 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463"} err="failed to get container status \"ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463\": rpc error: code = NotFound desc = could not find container \"ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463\": container with ID starting with ce4838a975f3b35227cf7d2639330792b71d35b0b4547f09f09222326bac5463 not found: ID does not exist" Mar 13 10:44:51.371964 master-0 kubenswrapper[7271]: I0313 10:44:51.371747 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"] Mar 13 10:44:51.375602 master-0 kubenswrapper[7271]: I0313 10:44:51.375539 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75b4cdcbf-pwt9b"] Mar 13 10:44:51.385273 master-0 kubenswrapper[7271]: I0313 10:44:51.385222 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6575454847-qh469"] Mar 13 10:44:51.388539 master-0 kubenswrapper[7271]: I0313 10:44:51.388508 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6575454847-qh469"] Mar 13 10:44:51.652930 master-0 kubenswrapper[7271]: I0313 10:44:51.652864 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="164610fc-3942-4e85-9f80-a335c9efcc2f" path="/var/lib/kubelet/pods/164610fc-3942-4e85-9f80-a335c9efcc2f/volumes" Mar 13 10:44:51.653488 
master-0 kubenswrapper[7271]: I0313 10:44:51.653457 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79b16eac-51dc-486d-88f0-72fc29a91aa0" path="/var/lib/kubelet/pods/79b16eac-51dc-486d-88f0-72fc29a91aa0/volumes" Mar 13 10:44:51.846453 master-0 kubenswrapper[7271]: I0313 10:44:51.846393 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"] Mar 13 10:44:51.846730 master-0 kubenswrapper[7271]: E0313 10:44:51.846682 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b16eac-51dc-486d-88f0-72fc29a91aa0" containerName="route-controller-manager" Mar 13 10:44:51.846730 master-0 kubenswrapper[7271]: I0313 10:44:51.846696 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b16eac-51dc-486d-88f0-72fc29a91aa0" containerName="route-controller-manager" Mar 13 10:44:51.846730 master-0 kubenswrapper[7271]: E0313 10:44:51.846708 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164610fc-3942-4e85-9f80-a335c9efcc2f" containerName="controller-manager" Mar 13 10:44:51.846730 master-0 kubenswrapper[7271]: I0313 10:44:51.846715 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="164610fc-3942-4e85-9f80-a335c9efcc2f" containerName="controller-manager" Mar 13 10:44:51.846907 master-0 kubenswrapper[7271]: I0313 10:44:51.846841 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b16eac-51dc-486d-88f0-72fc29a91aa0" containerName="route-controller-manager" Mar 13 10:44:51.846907 master-0 kubenswrapper[7271]: I0313 10:44:51.846854 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="164610fc-3942-4e85-9f80-a335c9efcc2f" containerName="controller-manager" Mar 13 10:44:51.847306 master-0 kubenswrapper[7271]: I0313 10:44:51.847280 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.851243 master-0 kubenswrapper[7271]: I0313 10:44:51.851194 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 10:44:51.851390 master-0 kubenswrapper[7271]: W0313 10:44:51.851248 7271 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'master-0' and this object Mar 13 10:44:51.851390 master-0 kubenswrapper[7271]: W0313 10:44:51.851321 7271 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fn5mm": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-fn5mm" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'master-0' and this object Mar 13 10:44:51.851390 master-0 kubenswrapper[7271]: E0313 10:44:51.851319 7271 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 13 10:44:51.851390 master-0 kubenswrapper[7271]: E0313 10:44:51.851368 7271 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-fn5mm\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"route-controller-manager-sa-dockercfg-fn5mm\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 13 10:44:51.851390 master-0 kubenswrapper[7271]: W0313 10:44:51.851261 7271 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'master-0' and this object Mar 13 10:44:51.851647 master-0 kubenswrapper[7271]: E0313 10:44:51.851405 7271 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 13 10:44:51.851647 master-0 kubenswrapper[7271]: W0313 10:44:51.851449 7271 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'master-0' and this object Mar 13 10:44:51.851647 master-0 kubenswrapper[7271]: E0313 10:44:51.851471 7271 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the 
namespace \"openshift-route-controller-manager\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 13 10:44:51.851774 master-0 kubenswrapper[7271]: I0313 10:44:51.851657 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 10:44:51.855201 master-0 kubenswrapper[7271]: I0313 10:44:51.855132 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-867876d6b6-tpq67"] Mar 13 10:44:51.856826 master-0 kubenswrapper[7271]: I0313 10:44:51.856791 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.860842 master-0 kubenswrapper[7271]: I0313 10:44:51.860799 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:44:51.861373 master-0 kubenswrapper[7271]: I0313 10:44:51.861271 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:44:51.862170 master-0 kubenswrapper[7271]: I0313 10:44:51.862108 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"] Mar 13 10:44:51.866215 master-0 kubenswrapper[7271]: I0313 10:44:51.866157 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:44:51.866924 master-0 kubenswrapper[7271]: I0313 10:44:51.866884 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:44:51.866993 master-0 kubenswrapper[7271]: I0313 10:44:51.866938 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:44:51.866993 master-0 kubenswrapper[7271]: I0313 
10:44:51.866960 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-l5tkf" Mar 13 10:44:51.869556 master-0 kubenswrapper[7271]: I0313 10:44:51.869516 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:44:51.870794 master-0 kubenswrapper[7271]: I0313 10:44:51.870758 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.870892 master-0 kubenswrapper[7271]: I0313 10:44:51.870857 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.870938 master-0 kubenswrapper[7271]: I0313 10:44:51.870917 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.870975 master-0 kubenswrapper[7271]: I0313 10:44:51.870955 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert\") pod 
\"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.871056 master-0 kubenswrapper[7271]: I0313 10:44:51.871024 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.871094 master-0 kubenswrapper[7271]: I0313 10:44:51.871079 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5656\" (UniqueName: \"kubernetes.io/projected/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-kube-api-access-f5656\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.871144 master-0 kubenswrapper[7271]: I0313 10:44:51.871121 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.871187 master-0 kubenswrapper[7271]: I0313 10:44:51.871159 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " 
pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.871251 master-0 kubenswrapper[7271]: I0313 10:44:51.871219 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcb99\" (UniqueName: \"kubernetes.io/projected/c09f42db-e6d7-469d-9761-88a879f6aa6b-kube-api-access-mcb99\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.881668 master-0 kubenswrapper[7271]: I0313 10:44:51.880079 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-867876d6b6-tpq67"] Mar 13 10:44:51.885049 master-0 kubenswrapper[7271]: I0313 10:44:51.882195 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:44:51.885049 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:44:51.885049 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:44:51.885049 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:44:51.885049 master-0 kubenswrapper[7271]: I0313 10:44:51.882254 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:44:51.885049 master-0 kubenswrapper[7271]: I0313 10:44:51.882327 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:44:51.885049 master-0 kubenswrapper[7271]: I0313 10:44:51.883096 7271 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="router" containerStatusID={"Type":"cri-o","ID":"9aabceaa9098fa374fa3be7884e41fb57131871ca89880498f237e8d19971731"} pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" containerMessage="Container router failed startup probe, will be restarted" Mar 13 10:44:51.885049 master-0 kubenswrapper[7271]: I0313 10:44:51.883133 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" containerID="cri-o://9aabceaa9098fa374fa3be7884e41fb57131871ca89880498f237e8d19971731" gracePeriod=3600 Mar 13 10:44:51.972729 master-0 kubenswrapper[7271]: I0313 10:44:51.972665 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.972729 master-0 kubenswrapper[7271]: I0313 10:44:51.972729 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.973304 master-0 kubenswrapper[7271]: I0313 10:44:51.972755 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.973304 master-0 
kubenswrapper[7271]: I0313 10:44:51.972863 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5656\" (UniqueName: \"kubernetes.io/projected/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-kube-api-access-f5656\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.973304 master-0 kubenswrapper[7271]: I0313 10:44:51.973162 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.973304 master-0 kubenswrapper[7271]: I0313 10:44:51.973193 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.973483 master-0 kubenswrapper[7271]: I0313 10:44:51.973443 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcb99\" (UniqueName: \"kubernetes.io/projected/c09f42db-e6d7-469d-9761-88a879f6aa6b-kube-api-access-mcb99\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.973533 master-0 kubenswrapper[7271]: I0313 10:44:51.973520 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.973566 master-0 kubenswrapper[7271]: I0313 10:44:51.973554 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.974387 master-0 kubenswrapper[7271]: I0313 10:44:51.974353 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.975050 master-0 kubenswrapper[7271]: I0313 10:44:51.975007 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.975111 master-0 kubenswrapper[7271]: I0313 10:44:51.975064 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.976344 master-0 kubenswrapper[7271]: I0313 10:44:51.976309 7271 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:51.989648 master-0 kubenswrapper[7271]: I0313 10:44:51.989578 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcb99\" (UniqueName: \"kubernetes.io/projected/c09f42db-e6d7-469d-9761-88a879f6aa6b-kube-api-access-mcb99\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:51.990042 master-0 kubenswrapper[7271]: I0313 10:44:51.990010 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5656\" (UniqueName: \"kubernetes.io/projected/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-kube-api-access-f5656\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:52.169932 master-0 kubenswrapper[7271]: E0313 10:44:52.169779 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 10:44:52.171159 master-0 kubenswrapper[7271]: E0313 10:44:52.171112 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 10:44:52.172562 master-0 kubenswrapper[7271]: E0313 10:44:52.172537 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 10:44:52.172689 master-0 kubenswrapper[7271]: E0313 10:44:52.172666 7271 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" podUID="d1100866-59a5-4653-b8eb-7945515ae057" containerName="kube-multus-additional-cni-plugins" Mar 13 10:44:52.219634 master-0 kubenswrapper[7271]: I0313 10:44:52.219544 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:52.611061 master-0 kubenswrapper[7271]: I0313 10:44:52.610976 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-867876d6b6-tpq67"] Mar 13 10:44:52.806141 master-0 kubenswrapper[7271]: I0313 10:44:52.806095 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 10:44:52.818509 master-0 kubenswrapper[7271]: I0313 10:44:52.818457 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:52.973681 master-0 kubenswrapper[7271]: E0313 10:44:52.973516 
7271 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:44:52.974346 master-0 kubenswrapper[7271]: E0313 10:44:52.974331 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config podName:c09f42db-e6d7-469d-9761-88a879f6aa6b nodeName:}" failed. No retries permitted until 2026-03-13 10:44:53.474310404 +0000 UTC m=+548.001132794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config") pod "route-controller-manager-7d9bd68fd6-lwnzl" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:44:52.974458 master-0 kubenswrapper[7271]: E0313 10:44:52.973557 7271 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:44:52.974543 master-0 kubenswrapper[7271]: E0313 10:44:52.974532 7271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca podName:c09f42db-e6d7-469d-9761-88a879f6aa6b nodeName:}" failed. No retries permitted until 2026-03-13 10:44:53.47451969 +0000 UTC m=+548.001342080 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca") pod "route-controller-manager-7d9bd68fd6-lwnzl" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:44:52.992460 master-0 kubenswrapper[7271]: I0313 10:44:52.992394 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fn5mm" Mar 13 10:44:53.086689 master-0 kubenswrapper[7271]: I0313 10:44:53.086634 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 10:44:53.337625 master-0 kubenswrapper[7271]: I0313 10:44:53.337548 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" event={"ID":"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc","Type":"ContainerStarted","Data":"3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432"} Mar 13 10:44:53.337625 master-0 kubenswrapper[7271]: I0313 10:44:53.337607 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" event={"ID":"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc","Type":"ContainerStarted","Data":"dd93ec4fe47e71fd21c0051085976706d225fa5cba2fcde1e22ce417bdc6d6e7"} Mar 13 10:44:53.338109 master-0 kubenswrapper[7271]: I0313 10:44:53.338067 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:53.342522 master-0 kubenswrapper[7271]: I0313 10:44:53.342488 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:44:53.361501 master-0 kubenswrapper[7271]: I0313 10:44:53.361383 7271 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" podStartSLOduration=3.36136171 podStartE2EDuration="3.36136171s" podCreationTimestamp="2026-03-13 10:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:53.355398157 +0000 UTC m=+547.882220597" watchObservedRunningTime="2026-03-13 10:44:53.36136171 +0000 UTC m=+547.888184100" Mar 13 10:44:53.389521 master-0 kubenswrapper[7271]: I0313 10:44:53.389472 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 10:44:53.441673 master-0 kubenswrapper[7271]: I0313 10:44:53.436140 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 10:44:53.441673 master-0 kubenswrapper[7271]: I0313 10:44:53.436972 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.444894 master-0 kubenswrapper[7271]: I0313 10:44:53.444514 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 10:44:53.494978 master-0 kubenswrapper[7271]: I0313 10:44:53.494918 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:53.494978 master-0 kubenswrapper[7271]: I0313 10:44:53.494986 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a2a95-178c-4fcd-9866-3a149948d1d3-kube-api-access\") pod \"installer-2-master-0\" (UID: 
\"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.495232 master-0 kubenswrapper[7271]: I0313 10:44:53.495006 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-var-lock\") pod \"installer-2-master-0\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.495232 master-0 kubenswrapper[7271]: I0313 10:44:53.495049 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:53.495232 master-0 kubenswrapper[7271]: I0313 10:44:53.495064 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.495935 master-0 kubenswrapper[7271]: I0313 10:44:53.495916 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:53.496174 master-0 kubenswrapper[7271]: I0313 10:44:53.496147 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:53.596282 master-0 kubenswrapper[7271]: I0313 10:44:53.596109 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a2a95-178c-4fcd-9866-3a149948d1d3-kube-api-access\") pod \"installer-2-master-0\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.596282 master-0 kubenswrapper[7271]: I0313 10:44:53.596189 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-var-lock\") pod \"installer-2-master-0\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.596282 master-0 kubenswrapper[7271]: I0313 10:44:53.596258 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.596521 master-0 kubenswrapper[7271]: I0313 10:44:53.596348 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.596521 master-0 kubenswrapper[7271]: I0313 10:44:53.596413 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-var-lock\") pod \"installer-2-master-0\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.615393 master-0 kubenswrapper[7271]: I0313 10:44:53.615361 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a2a95-178c-4fcd-9866-3a149948d1d3-kube-api-access\") pod \"installer-2-master-0\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:53.673441 master-0 kubenswrapper[7271]: I0313 10:44:53.673397 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:53.770246 master-0 kubenswrapper[7271]: I0313 10:44:53.770151 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:44:54.085710 master-0 kubenswrapper[7271]: I0313 10:44:54.085636 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"] Mar 13 10:44:54.203609 master-0 kubenswrapper[7271]: I0313 10:44:54.203535 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 10:44:54.348238 master-0 kubenswrapper[7271]: I0313 10:44:54.348137 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"a55a2a95-178c-4fcd-9866-3a149948d1d3","Type":"ContainerStarted","Data":"07760884fb73f623d10ed12cbe3f37005e2db59b258a61a52af5d3fc8c6b9063"} Mar 13 10:44:54.351138 master-0 kubenswrapper[7271]: I0313 10:44:54.351088 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" 
event={"ID":"c09f42db-e6d7-469d-9761-88a879f6aa6b","Type":"ContainerStarted","Data":"a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983"} Mar 13 10:44:54.351138 master-0 kubenswrapper[7271]: I0313 10:44:54.351115 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" event={"ID":"c09f42db-e6d7-469d-9761-88a879f6aa6b","Type":"ContainerStarted","Data":"2baa20e270e178f3e40e4ef86226c93b0ff3020bf6dac2cb5d4f63eecde92557"} Mar 13 10:44:54.351661 master-0 kubenswrapper[7271]: I0313 10:44:54.351572 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:54.376595 master-0 kubenswrapper[7271]: I0313 10:44:54.376405 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" podStartSLOduration=4.376371834 podStartE2EDuration="4.376371834s" podCreationTimestamp="2026-03-13 10:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:54.371494742 +0000 UTC m=+548.898317132" watchObservedRunningTime="2026-03-13 10:44:54.376371834 +0000 UTC m=+548.903194224" Mar 13 10:44:54.548502 master-0 kubenswrapper[7271]: I0313 10:44:54.548422 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:44:55.201427 master-0 kubenswrapper[7271]: I0313 10:44:55.201378 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vpnmf_d1100866-59a5-4653-b8eb-7945515ae057/kube-multus-additional-cni-plugins/0.log" Mar 13 10:44:55.202147 master-0 kubenswrapper[7271]: I0313 10:44:55.201451 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:55.302495 master-0 kubenswrapper[7271]: E0313 10:44:55.302430 7271 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1100866_59a5_4653_b8eb_7945515ae057.slice/crio-77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1100866_59a5_4653_b8eb_7945515ae057.slice/crio-conmon-77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034.scope\": RecentStats: unable to find data in memory cache]" Mar 13 10:44:55.336562 master-0 kubenswrapper[7271]: I0313 10:44:55.336409 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d1100866-59a5-4653-b8eb-7945515ae057-cni-sysctl-allowlist\") pod \"d1100866-59a5-4653-b8eb-7945515ae057\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " Mar 13 10:44:55.336766 master-0 kubenswrapper[7271]: I0313 10:44:55.336562 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d1100866-59a5-4653-b8eb-7945515ae057-ready\") pod \"d1100866-59a5-4653-b8eb-7945515ae057\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " Mar 13 10:44:55.336974 master-0 kubenswrapper[7271]: I0313 10:44:55.336944 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvr2l\" (UniqueName: \"kubernetes.io/projected/d1100866-59a5-4653-b8eb-7945515ae057-kube-api-access-kvr2l\") pod 
\"d1100866-59a5-4653-b8eb-7945515ae057\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " Mar 13 10:44:55.337133 master-0 kubenswrapper[7271]: I0313 10:44:55.337118 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1100866-59a5-4653-b8eb-7945515ae057-tuning-conf-dir\") pod \"d1100866-59a5-4653-b8eb-7945515ae057\" (UID: \"d1100866-59a5-4653-b8eb-7945515ae057\") " Mar 13 10:44:55.337215 master-0 kubenswrapper[7271]: I0313 10:44:55.337185 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1100866-59a5-4653-b8eb-7945515ae057-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "d1100866-59a5-4653-b8eb-7945515ae057" (UID: "d1100866-59a5-4653-b8eb-7945515ae057"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:44:55.337493 master-0 kubenswrapper[7271]: I0313 10:44:55.337445 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1100866-59a5-4653-b8eb-7945515ae057-ready" (OuterVolumeSpecName: "ready") pod "d1100866-59a5-4653-b8eb-7945515ae057" (UID: "d1100866-59a5-4653-b8eb-7945515ae057"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:44:55.337558 master-0 kubenswrapper[7271]: I0313 10:44:55.337509 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1100866-59a5-4653-b8eb-7945515ae057-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "d1100866-59a5-4653-b8eb-7945515ae057" (UID: "d1100866-59a5-4653-b8eb-7945515ae057"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:44:55.338030 master-0 kubenswrapper[7271]: I0313 10:44:55.338008 7271 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1100866-59a5-4653-b8eb-7945515ae057-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:55.338117 master-0 kubenswrapper[7271]: I0313 10:44:55.338104 7271 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d1100866-59a5-4653-b8eb-7945515ae057-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:55.338182 master-0 kubenswrapper[7271]: I0313 10:44:55.338173 7271 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d1100866-59a5-4653-b8eb-7945515ae057-ready\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:55.343629 master-0 kubenswrapper[7271]: I0313 10:44:55.343527 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1100866-59a5-4653-b8eb-7945515ae057-kube-api-access-kvr2l" (OuterVolumeSpecName: "kube-api-access-kvr2l") pod "d1100866-59a5-4653-b8eb-7945515ae057" (UID: "d1100866-59a5-4653-b8eb-7945515ae057"). InnerVolumeSpecName "kube-api-access-kvr2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:44:55.363311 master-0 kubenswrapper[7271]: I0313 10:44:55.363240 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"a55a2a95-178c-4fcd-9866-3a149948d1d3","Type":"ContainerStarted","Data":"1095e539909ae9e46360f463a967bbc617daeb2d47612ebdc2519683e6fd658c"} Mar 13 10:44:55.368298 master-0 kubenswrapper[7271]: I0313 10:44:55.368142 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vpnmf_d1100866-59a5-4653-b8eb-7945515ae057/kube-multus-additional-cni-plugins/0.log" Mar 13 10:44:55.368298 master-0 kubenswrapper[7271]: I0313 10:44:55.368207 7271 generic.go:334] "Generic (PLEG): container finished" podID="d1100866-59a5-4653-b8eb-7945515ae057" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" exitCode=137 Mar 13 10:44:55.368785 master-0 kubenswrapper[7271]: I0313 10:44:55.368725 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" event={"ID":"d1100866-59a5-4653-b8eb-7945515ae057","Type":"ContainerDied","Data":"77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034"} Mar 13 10:44:55.368840 master-0 kubenswrapper[7271]: I0313 10:44:55.368799 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" event={"ID":"d1100866-59a5-4653-b8eb-7945515ae057","Type":"ContainerDied","Data":"ae954fdd1298594ecbfea8f7764251c1b6f5d4b103893590537173967636deb0"} Mar 13 10:44:55.368840 master-0 kubenswrapper[7271]: I0313 10:44:55.368830 7271 scope.go:117] "RemoveContainer" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" Mar 13 10:44:55.369020 master-0 kubenswrapper[7271]: I0313 10:44:55.368996 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vpnmf" Mar 13 10:44:55.392698 master-0 kubenswrapper[7271]: I0313 10:44:55.392651 7271 scope.go:117] "RemoveContainer" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" Mar 13 10:44:55.393343 master-0 kubenswrapper[7271]: E0313 10:44:55.393304 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034\": container with ID starting with 77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034 not found: ID does not exist" containerID="77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034" Mar 13 10:44:55.393460 master-0 kubenswrapper[7271]: I0313 10:44:55.393432 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034"} err="failed to get container status \"77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034\": rpc error: code = NotFound desc = could not find container \"77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034\": container with ID starting with 77e91ea71121646378f89870c07eee9855aebbfb8c6930903ac5dc5be5a2c034 not found: ID does not exist" Mar 13 10:44:55.395366 master-0 kubenswrapper[7271]: I0313 10:44:55.395299 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.395280966 podStartE2EDuration="2.395280966s" podCreationTimestamp="2026-03-13 10:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:44:55.383522127 +0000 UTC m=+549.910344517" watchObservedRunningTime="2026-03-13 10:44:55.395280966 +0000 UTC m=+549.922103346" Mar 13 10:44:55.410400 master-0 kubenswrapper[7271]: I0313 
10:44:55.410350 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vpnmf"] Mar 13 10:44:55.417489 master-0 kubenswrapper[7271]: I0313 10:44:55.417424 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vpnmf"] Mar 13 10:44:55.441381 master-0 kubenswrapper[7271]: I0313 10:44:55.441204 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvr2l\" (UniqueName: \"kubernetes.io/projected/d1100866-59a5-4653-b8eb-7945515ae057-kube-api-access-kvr2l\") on node \"master-0\" DevicePath \"\"" Mar 13 10:44:55.656120 master-0 kubenswrapper[7271]: I0313 10:44:55.656013 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1100866-59a5-4653-b8eb-7945515ae057" path="/var/lib/kubelet/pods/d1100866-59a5-4653-b8eb-7945515ae057/volumes" Mar 13 10:44:59.045995 master-0 kubenswrapper[7271]: I0313 10:44:59.045933 7271 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 13 10:44:59.046622 master-0 kubenswrapper[7271]: I0313 10:44:59.046306 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466" gracePeriod=30 Mar 13 10:44:59.046622 master-0 kubenswrapper[7271]: I0313 10:44:59.046358 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" containerID="cri-o://a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9" gracePeriod=30 Mar 13 10:44:59.046622 master-0 kubenswrapper[7271]: I0313 10:44:59.046440 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" 
containerID="cri-o://ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b" gracePeriod=30 Mar 13 10:44:59.046622 master-0 kubenswrapper[7271]: I0313 10:44:59.046391 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" containerID="cri-o://3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb" gracePeriod=30 Mar 13 10:44:59.046622 master-0 kubenswrapper[7271]: I0313 10:44:59.046424 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597" gracePeriod=30 Mar 13 10:44:59.068095 master-0 kubenswrapper[7271]: I0313 10:44:59.068062 7271 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 13 10:44:59.068535 master-0 kubenswrapper[7271]: E0313 10:44:59.068518 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 10:44:59.068633 master-0 kubenswrapper[7271]: I0313 10:44:59.068623 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 10:44:59.068724 master-0 kubenswrapper[7271]: E0313 10:44:59.068712 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 10:44:59.068822 master-0 kubenswrapper[7271]: I0313 10:44:59.068811 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 10:44:59.068886 master-0 kubenswrapper[7271]: E0313 10:44:59.068877 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 
13 10:44:59.068941 master-0 kubenswrapper[7271]: I0313 10:44:59.068932 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 10:44:59.069003 master-0 kubenswrapper[7271]: E0313 10:44:59.068993 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 13 10:44:59.069061 master-0 kubenswrapper[7271]: I0313 10:44:59.069052 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 13 10:44:59.069148 master-0 kubenswrapper[7271]: E0313 10:44:59.069138 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 13 10:44:59.069206 master-0 kubenswrapper[7271]: I0313 10:44:59.069197 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 13 10:44:59.069267 master-0 kubenswrapper[7271]: E0313 10:44:59.069258 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 10:44:59.069321 master-0 kubenswrapper[7271]: I0313 10:44:59.069313 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 10:44:59.069381 master-0 kubenswrapper[7271]: E0313 10:44:59.069372 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1100866-59a5-4653-b8eb-7945515ae057" containerName="kube-multus-additional-cni-plugins" Mar 13 10:44:59.069436 master-0 kubenswrapper[7271]: I0313 10:44:59.069427 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1100866-59a5-4653-b8eb-7945515ae057" containerName="kube-multus-additional-cni-plugins" Mar 13 10:44:59.069499 master-0 kubenswrapper[7271]: E0313 10:44:59.069490 7271 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 13 10:44:59.069555 master-0 kubenswrapper[7271]: I0313 10:44:59.069546 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 13 10:44:59.069631 master-0 kubenswrapper[7271]: E0313 10:44:59.069622 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 10:44:59.069690 master-0 kubenswrapper[7271]: I0313 10:44:59.069682 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 10:44:59.069876 master-0 kubenswrapper[7271]: I0313 10:44:59.069864 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 10:44:59.069946 master-0 kubenswrapper[7271]: I0313 10:44:59.069936 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1100866-59a5-4653-b8eb-7945515ae057" containerName="kube-multus-additional-cni-plugins" Mar 13 10:44:59.070009 master-0 kubenswrapper[7271]: I0313 10:44:59.069999 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 10:44:59.070076 master-0 kubenswrapper[7271]: I0313 10:44:59.070067 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 10:44:59.070131 master-0 kubenswrapper[7271]: I0313 10:44:59.070122 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 10:44:59.070187 master-0 kubenswrapper[7271]: I0313 10:44:59.070179 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 10:44:59.103901 master-0 kubenswrapper[7271]: I0313 
10:44:59.103835 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.104027 master-0 kubenswrapper[7271]: I0313 10:44:59.103950 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.104027 master-0 kubenswrapper[7271]: I0313 10:44:59.104010 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.104095 master-0 kubenswrapper[7271]: I0313 10:44:59.104053 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.104128 master-0 kubenswrapper[7271]: I0313 10:44:59.104093 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.104197 master-0 kubenswrapper[7271]: I0313 10:44:59.104169 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.134167 master-0 kubenswrapper[7271]: E0313 10:44:59.134041 7271 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6\": RecentStats: unable to find data in memory cache]" Mar 13 10:44:59.205852 master-0 kubenswrapper[7271]: I0313 10:44:59.205804 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.205958 master-0 kubenswrapper[7271]: I0313 10:44:59.205860 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.205958 master-0 kubenswrapper[7271]: I0313 10:44:59.205881 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.205958 master-0 kubenswrapper[7271]: I0313 10:44:59.205923 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" 
(UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.206257 master-0 kubenswrapper[7271]: I0313 10:44:59.206027 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.206311 master-0 kubenswrapper[7271]: I0313 10:44:59.206145 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.206350 master-0 kubenswrapper[7271]: I0313 10:44:59.206142 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.206458 master-0 kubenswrapper[7271]: I0313 10:44:59.206408 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.206524 master-0 kubenswrapper[7271]: I0313 10:44:59.206161 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.206606 master-0 kubenswrapper[7271]: I0313 10:44:59.206227 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.206690 master-0 kubenswrapper[7271]: I0313 10:44:59.206576 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.206760 master-0 kubenswrapper[7271]: I0313 10:44:59.206205 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:44:59.404975 master-0 kubenswrapper[7271]: I0313 10:44:59.404939 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 10:44:59.406262 master-0 kubenswrapper[7271]: I0313 10:44:59.406225 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 10:44:59.408566 master-0 kubenswrapper[7271]: I0313 10:44:59.408515 7271 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb" exitCode=2 Mar 13 10:44:59.408566 master-0 kubenswrapper[7271]: I0313 10:44:59.408547 7271 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597" exitCode=0 Mar 13 10:44:59.408566 master-0 kubenswrapper[7271]: I0313 10:44:59.408559 7271 generic.go:334] "Generic 
(PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9" exitCode=2 Mar 13 10:45:04.454616 master-0 kubenswrapper[7271]: I0313 10:45:04.454490 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-d787l_95339220-324d-45e7-bdc2-e4f42fbd1d32/multus-admission-controller/0.log" Mar 13 10:45:04.455306 master-0 kubenswrapper[7271]: I0313 10:45:04.454637 7271 generic.go:334] "Generic (PLEG): container finished" podID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerID="6ac08771019787a7c11813b1fc15b8b6c6e6e35ed0a49a438a259a987603471f" exitCode=137 Mar 13 10:45:04.455306 master-0 kubenswrapper[7271]: I0313 10:45:04.454709 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" event={"ID":"95339220-324d-45e7-bdc2-e4f42fbd1d32","Type":"ContainerDied","Data":"6ac08771019787a7c11813b1fc15b8b6c6e6e35ed0a49a438a259a987603471f"} Mar 13 10:45:05.101787 master-0 kubenswrapper[7271]: I0313 10:45:05.101709 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-d787l_95339220-324d-45e7-bdc2-e4f42fbd1d32/multus-admission-controller/0.log" Mar 13 10:45:05.102114 master-0 kubenswrapper[7271]: I0313 10:45:05.101817 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:45:05.240846 master-0 kubenswrapper[7271]: I0313 10:45:05.240806 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") pod \"95339220-324d-45e7-bdc2-e4f42fbd1d32\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " Mar 13 10:45:05.241206 master-0 kubenswrapper[7271]: I0313 10:45:05.241189 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j59zw\" (UniqueName: \"kubernetes.io/projected/95339220-324d-45e7-bdc2-e4f42fbd1d32-kube-api-access-j59zw\") pod \"95339220-324d-45e7-bdc2-e4f42fbd1d32\" (UID: \"95339220-324d-45e7-bdc2-e4f42fbd1d32\") " Mar 13 10:45:05.244917 master-0 kubenswrapper[7271]: I0313 10:45:05.244836 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95339220-324d-45e7-bdc2-e4f42fbd1d32-kube-api-access-j59zw" (OuterVolumeSpecName: "kube-api-access-j59zw") pod "95339220-324d-45e7-bdc2-e4f42fbd1d32" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32"). InnerVolumeSpecName "kube-api-access-j59zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:45:05.246072 master-0 kubenswrapper[7271]: I0313 10:45:05.245968 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "95339220-324d-45e7-bdc2-e4f42fbd1d32" (UID: "95339220-324d-45e7-bdc2-e4f42fbd1d32"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:45:05.343575 master-0 kubenswrapper[7271]: I0313 10:45:05.343470 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j59zw\" (UniqueName: \"kubernetes.io/projected/95339220-324d-45e7-bdc2-e4f42fbd1d32-kube-api-access-j59zw\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:05.343575 master-0 kubenswrapper[7271]: I0313 10:45:05.343567 7271 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/95339220-324d-45e7-bdc2-e4f42fbd1d32-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:05.465520 master-0 kubenswrapper[7271]: I0313 10:45:05.465456 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-d787l_95339220-324d-45e7-bdc2-e4f42fbd1d32/multus-admission-controller/0.log" Mar 13 10:45:05.466219 master-0 kubenswrapper[7271]: I0313 10:45:05.465560 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" event={"ID":"95339220-324d-45e7-bdc2-e4f42fbd1d32","Type":"ContainerDied","Data":"64faab925d07fb80bd4ae56d2309ec92e60b31ddda32859daa4f5dfef61fdcc5"} Mar 13 10:45:05.466219 master-0 kubenswrapper[7271]: I0313 10:45:05.465667 7271 scope.go:117] "RemoveContainer" containerID="b8b86d02f4f86b49f256fe88515a474a9fb718a6bd218f138f4504fc8b7c89fc" Mar 13 10:45:05.466219 master-0 kubenswrapper[7271]: I0313 10:45:05.465768 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" Mar 13 10:45:05.487187 master-0 kubenswrapper[7271]: I0313 10:45:05.486419 7271 scope.go:117] "RemoveContainer" containerID="6ac08771019787a7c11813b1fc15b8b6c6e6e35ed0a49a438a259a987603471f" Mar 13 10:45:05.495258 master-0 kubenswrapper[7271]: E0313 10:45:05.495194 7271 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6\": RecentStats: unable to find data in memory cache]" Mar 13 10:45:12.532421 master-0 kubenswrapper[7271]: I0313 10:45:12.532134 7271 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479" exitCode=1 Mar 13 10:45:12.532421 master-0 kubenswrapper[7271]: I0313 10:45:12.532239 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479"} Mar 13 10:45:12.532421 master-0 kubenswrapper[7271]: I0313 10:45:12.532369 7271 scope.go:117] "RemoveContainer" containerID="281e47a8ccfe9b7bd7d1fae86c8e235e63f17e9935336f3e6ad3bed18be23300" Mar 13 10:45:12.534029 master-0 kubenswrapper[7271]: I0313 10:45:12.533489 7271 scope.go:117] "RemoveContainer" containerID="6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479" Mar 13 10:45:12.534161 master-0 kubenswrapper[7271]: E0313 10:45:12.534064 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:45:13.542822 master-0 kubenswrapper[7271]: I0313 10:45:13.542578 7271 generic.go:334] "Generic (PLEG): container finished" podID="1769d48d-7ef0-48ee-9b7d-b46151ae5df6" containerID="5399579cbf50883dcc4aa7699616e64f69ad85ad80602aae96557b44afc05a5a" exitCode=0 Mar 13 10:45:13.543466 master-0 kubenswrapper[7271]: I0313 10:45:13.542789 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"1769d48d-7ef0-48ee-9b7d-b46151ae5df6","Type":"ContainerDied","Data":"5399579cbf50883dcc4aa7699616e64f69ad85ad80602aae96557b44afc05a5a"} Mar 13 10:45:13.796069 master-0 kubenswrapper[7271]: I0313 10:45:13.795875 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:45:13.797240 master-0 kubenswrapper[7271]: I0313 10:45:13.797173 7271 scope.go:117] "RemoveContainer" containerID="6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479" Mar 13 10:45:13.797829 master-0 kubenswrapper[7271]: E0313 10:45:13.797757 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:45:14.313398 master-0 kubenswrapper[7271]: E0313 10:45:14.313222 7271 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6\": RecentStats: unable to find data in memory cache]" Mar 13 10:45:14.458830 master-0 kubenswrapper[7271]: I0313 10:45:14.458768 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_b3bcb671-5236-49fb-8540-131f18b91fc3/installer/0.log" Mar 13 10:45:14.459078 master-0 kubenswrapper[7271]: I0313 10:45:14.458845 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:45:14.528887 master-0 kubenswrapper[7271]: I0313 10:45:14.528823 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-var-lock\") pod \"b3bcb671-5236-49fb-8540-131f18b91fc3\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " Mar 13 10:45:14.528887 master-0 kubenswrapper[7271]: I0313 10:45:14.528891 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bcb671-5236-49fb-8540-131f18b91fc3-kube-api-access\") pod \"b3bcb671-5236-49fb-8540-131f18b91fc3\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " Mar 13 10:45:14.529136 master-0 kubenswrapper[7271]: I0313 10:45:14.528921 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-kubelet-dir\") pod \"b3bcb671-5236-49fb-8540-131f18b91fc3\" (UID: \"b3bcb671-5236-49fb-8540-131f18b91fc3\") " Mar 13 10:45:14.529321 master-0 kubenswrapper[7271]: I0313 10:45:14.529162 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b3bcb671-5236-49fb-8540-131f18b91fc3" (UID: "b3bcb671-5236-49fb-8540-131f18b91fc3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:14.529321 master-0 kubenswrapper[7271]: I0313 10:45:14.529261 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-var-lock" (OuterVolumeSpecName: "var-lock") pod "b3bcb671-5236-49fb-8540-131f18b91fc3" (UID: "b3bcb671-5236-49fb-8540-131f18b91fc3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:14.531753 master-0 kubenswrapper[7271]: I0313 10:45:14.531713 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3bcb671-5236-49fb-8540-131f18b91fc3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b3bcb671-5236-49fb-8540-131f18b91fc3" (UID: "b3bcb671-5236-49fb-8540-131f18b91fc3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:45:14.552820 master-0 kubenswrapper[7271]: I0313 10:45:14.552774 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_b3bcb671-5236-49fb-8540-131f18b91fc3/installer/0.log" Mar 13 10:45:14.553277 master-0 kubenswrapper[7271]: I0313 10:45:14.552840 7271 generic.go:334] "Generic (PLEG): container finished" podID="b3bcb671-5236-49fb-8540-131f18b91fc3" containerID="ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66" exitCode=1 Mar 13 10:45:14.553277 master-0 kubenswrapper[7271]: I0313 10:45:14.552907 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 10:45:14.553277 master-0 kubenswrapper[7271]: I0313 10:45:14.552940 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"b3bcb671-5236-49fb-8540-131f18b91fc3","Type":"ContainerDied","Data":"ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66"} Mar 13 10:45:14.553277 master-0 kubenswrapper[7271]: I0313 10:45:14.552967 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"b3bcb671-5236-49fb-8540-131f18b91fc3","Type":"ContainerDied","Data":"d10929038456048d0742620d09ad12198fa061332340d13fe780561ae6f8528b"} Mar 13 10:45:14.553277 master-0 kubenswrapper[7271]: I0313 10:45:14.552984 7271 scope.go:117] "RemoveContainer" containerID="ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66" Mar 13 10:45:14.568726 master-0 kubenswrapper[7271]: I0313 10:45:14.568681 7271 scope.go:117] "RemoveContainer" containerID="ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66" Mar 13 10:45:14.569306 master-0 kubenswrapper[7271]: E0313 10:45:14.569235 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66\": container with ID starting with ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66 not found: ID does not exist" containerID="ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66" Mar 13 10:45:14.569306 master-0 kubenswrapper[7271]: I0313 10:45:14.569279 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66"} err="failed to get container status \"ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66\": rpc error: code = NotFound 
desc = could not find container \"ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66\": container with ID starting with ff4d81f34f5a41e743d1b5d70a02b2768b8ec6e13c4ca20cdad80babd9b85b66 not found: ID does not exist" Mar 13 10:45:14.630545 master-0 kubenswrapper[7271]: I0313 10:45:14.630475 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:14.630545 master-0 kubenswrapper[7271]: I0313 10:45:14.630507 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bcb671-5236-49fb-8540-131f18b91fc3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:14.630545 master-0 kubenswrapper[7271]: I0313 10:45:14.630520 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3bcb671-5236-49fb-8540-131f18b91fc3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:14.788993 master-0 kubenswrapper[7271]: I0313 10:45:14.788969 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 10:45:14.796232 master-0 kubenswrapper[7271]: E0313 10:45:14.796079 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:45:04Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:45:04Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:45:04Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:45:04Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4855408bd0e4d0711383d0c14dcad53c98255ff9f83f6cbefb57e47eacc1f1f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:97bdbb5854e4ad7976209a44cff02c8a2b9542f58ad007c06a5c3a5e8266def1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8
98c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8c978bb5c329452b181f61f00452b4c2bfd83d245db56050bc7607972a791a76\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:e6567accc084db971e077b5ca666357e3a326fa27f69fc7135a5bc2e19f998eb\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221745369},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d384556891
54461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\
\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:45:14.832565 master-0 kubenswrapper[7271]: I0313 10:45:14.832477 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kubelet-dir\") pod \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " Mar 13 10:45:14.832833 master-0 kubenswrapper[7271]: I0313 10:45:14.832816 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-var-lock\") pod \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " Mar 13 10:45:14.832938 master-0 kubenswrapper[7271]: I0313 10:45:14.832925 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kube-api-access\") pod \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\" (UID: \"1769d48d-7ef0-48ee-9b7d-b46151ae5df6\") " Mar 13 10:45:14.833036 master-0 kubenswrapper[7271]: I0313 10:45:14.832633 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1769d48d-7ef0-48ee-9b7d-b46151ae5df6" (UID: "1769d48d-7ef0-48ee-9b7d-b46151ae5df6"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:14.833098 master-0 kubenswrapper[7271]: I0313 10:45:14.832877 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-var-lock" (OuterVolumeSpecName: "var-lock") pod "1769d48d-7ef0-48ee-9b7d-b46151ae5df6" (UID: "1769d48d-7ef0-48ee-9b7d-b46151ae5df6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:14.833328 master-0 kubenswrapper[7271]: I0313 10:45:14.833313 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:14.833403 master-0 kubenswrapper[7271]: I0313 10:45:14.833390 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:14.835722 master-0 kubenswrapper[7271]: I0313 10:45:14.835678 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1769d48d-7ef0-48ee-9b7d-b46151ae5df6" (UID: "1769d48d-7ef0-48ee-9b7d-b46151ae5df6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:45:14.934776 master-0 kubenswrapper[7271]: I0313 10:45:14.934703 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1769d48d-7ef0-48ee-9b7d-b46151ae5df6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:15.532887 master-0 kubenswrapper[7271]: E0313 10:45:15.532816 7271 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6\": RecentStats: unable to find data in memory cache]" Mar 13 10:45:15.563915 master-0 kubenswrapper[7271]: I0313 10:45:15.563759 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"1769d48d-7ef0-48ee-9b7d-b46151ae5df6","Type":"ContainerDied","Data":"0edd269bc9bfd58457b3b88ab218fa96e34778af571fb8288c4d56256e1a1e4d"} Mar 13 10:45:15.563915 master-0 kubenswrapper[7271]: I0313 10:45:15.563818 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0edd269bc9bfd58457b3b88ab218fa96e34778af571fb8288c4d56256e1a1e4d" Mar 13 10:45:15.564657 master-0 kubenswrapper[7271]: I0313 10:45:15.563956 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 10:45:17.216419 master-0 kubenswrapper[7271]: I0313 10:45:17.216304 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:45:17.217941 master-0 kubenswrapper[7271]: I0313 10:45:17.217212 7271 scope.go:117] "RemoveContainer" containerID="6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479" Mar 13 10:45:17.217941 master-0 kubenswrapper[7271]: E0313 10:45:17.217532 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:45:17.283342 master-0 kubenswrapper[7271]: E0313 10:45:17.283183 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:45:17.468713 master-0 kubenswrapper[7271]: I0313 10:45:17.468522 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:45:17.583360 master-0 kubenswrapper[7271]: I0313 10:45:17.583276 7271 scope.go:117] "RemoveContainer" containerID="6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479" Mar 13 10:45:17.583809 master-0 kubenswrapper[7271]: E0313 10:45:17.583754 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:45:20.385477 master-0 kubenswrapper[7271]: I0313 10:45:20.385423 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:45:24.797098 master-0 kubenswrapper[7271]: E0313 10:45:24.797017 7271 request.go:1255] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) Mar 13 10:45:24.797850 master-0 kubenswrapper[7271]: E0313 10:45:24.797119 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Mar 13 10:45:25.685930 master-0 kubenswrapper[7271]: E0313 10:45:25.685856 7271 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6\": RecentStats: unable to find data in memory cache]" Mar 13 10:45:27.283849 master-0 kubenswrapper[7271]: E0313 10:45:27.283745 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:45:29.134140 master-0 kubenswrapper[7271]: E0313 10:45:29.134077 7271 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6\": RecentStats: unable to find data in memory cache]" Mar 13 10:45:29.627817 master-0 kubenswrapper[7271]: I0313 10:45:29.627712 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 10:45:29.630146 master-0 kubenswrapper[7271]: I0313 10:45:29.630088 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 10:45:29.631147 master-0 kubenswrapper[7271]: I0313 10:45:29.631108 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 10:45:29.631700 master-0 kubenswrapper[7271]: I0313 10:45:29.631674 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 10:45:29.633389 master-0 kubenswrapper[7271]: I0313 10:45:29.633292 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 10:45:29.672399 master-0 kubenswrapper[7271]: I0313 10:45:29.672353 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 10:45:29.674079 master-0 kubenswrapper[7271]: I0313 10:45:29.674039 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 10:45:29.674886 master-0 kubenswrapper[7271]: I0313 10:45:29.674870 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 10:45:29.675548 master-0 kubenswrapper[7271]: I0313 10:45:29.675514 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 10:45:29.677101 master-0 kubenswrapper[7271]: I0313 10:45:29.677054 7271 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b" exitCode=137 Mar 13 10:45:29.677224 master-0 kubenswrapper[7271]: I0313 10:45:29.677198 7271 scope.go:117] "RemoveContainer" containerID="3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb" Mar 13 10:45:29.677315 master-0 kubenswrapper[7271]: I0313 10:45:29.677299 7271 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466" exitCode=137 Mar 13 10:45:29.677456 master-0 kubenswrapper[7271]: I0313 10:45:29.677204 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 10:45:29.695772 master-0 kubenswrapper[7271]: I0313 10:45:29.695753 7271 scope.go:117] "RemoveContainer" containerID="cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597" Mar 13 10:45:29.715816 master-0 kubenswrapper[7271]: I0313 10:45:29.715763 7271 scope.go:117] "RemoveContainer" containerID="a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9" Mar 13 10:45:29.736907 master-0 kubenswrapper[7271]: I0313 10:45:29.736863 7271 scope.go:117] "RemoveContainer" containerID="ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b" Mar 13 10:45:29.757477 master-0 kubenswrapper[7271]: I0313 10:45:29.757422 7271 scope.go:117] "RemoveContainer" containerID="1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466" Mar 13 10:45:29.773534 master-0 kubenswrapper[7271]: I0313 10:45:29.773491 7271 scope.go:117] "RemoveContainer" containerID="55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448" Mar 13 10:45:29.774753 master-0 kubenswrapper[7271]: I0313 10:45:29.774712 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 10:45:29.774871 master-0 kubenswrapper[7271]: I0313 10:45:29.774767 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 10:45:29.774871 master-0 kubenswrapper[7271]: I0313 10:45:29.774828 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod 
\"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 10:45:29.774871 master-0 kubenswrapper[7271]: I0313 10:45:29.774847 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:29.775064 master-0 kubenswrapper[7271]: I0313 10:45:29.774871 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:29.775064 master-0 kubenswrapper[7271]: I0313 10:45:29.774919 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 10:45:29.775064 master-0 kubenswrapper[7271]: I0313 10:45:29.774949 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 10:45:29.775064 master-0 kubenswrapper[7271]: I0313 10:45:29.774980 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). 
InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:29.775064 master-0 kubenswrapper[7271]: I0313 10:45:29.774995 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:29.775064 master-0 kubenswrapper[7271]: I0313 10:45:29.775024 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 10:45:29.775451 master-0 kubenswrapper[7271]: I0313 10:45:29.775073 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:29.775451 master-0 kubenswrapper[7271]: I0313 10:45:29.775393 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: "log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:45:29.776976 master-0 kubenswrapper[7271]: I0313 10:45:29.775708 7271 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:29.777140 master-0 kubenswrapper[7271]: I0313 10:45:29.777118 7271 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:29.777263 master-0 kubenswrapper[7271]: I0313 10:45:29.777244 7271 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:29.777386 master-0 kubenswrapper[7271]: I0313 10:45:29.777367 7271 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:29.777498 master-0 kubenswrapper[7271]: I0313 10:45:29.777479 7271 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:29.794379 master-0 kubenswrapper[7271]: I0313 10:45:29.794335 7271 scope.go:117] "RemoveContainer" containerID="961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e" Mar 13 10:45:29.810549 master-0 kubenswrapper[7271]: I0313 10:45:29.810519 7271 scope.go:117] "RemoveContainer" containerID="18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b" Mar 13 10:45:29.824032 master-0 kubenswrapper[7271]: I0313 10:45:29.823984 7271 scope.go:117] "RemoveContainer" 
containerID="3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb" Mar 13 10:45:29.824538 master-0 kubenswrapper[7271]: E0313 10:45:29.824501 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb\": container with ID starting with 3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb not found: ID does not exist" containerID="3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb" Mar 13 10:45:29.824634 master-0 kubenswrapper[7271]: I0313 10:45:29.824547 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb"} err="failed to get container status \"3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb\": rpc error: code = NotFound desc = could not find container \"3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb\": container with ID starting with 3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb not found: ID does not exist" Mar 13 10:45:29.824634 master-0 kubenswrapper[7271]: I0313 10:45:29.824572 7271 scope.go:117] "RemoveContainer" containerID="cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597" Mar 13 10:45:29.825048 master-0 kubenswrapper[7271]: E0313 10:45:29.825017 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597\": container with ID starting with cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597 not found: ID does not exist" containerID="cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597" Mar 13 10:45:29.825130 master-0 kubenswrapper[7271]: I0313 10:45:29.825050 7271 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597"} err="failed to get container status \"cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597\": rpc error: code = NotFound desc = could not find container \"cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597\": container with ID starting with cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597 not found: ID does not exist" Mar 13 10:45:29.825130 master-0 kubenswrapper[7271]: I0313 10:45:29.825074 7271 scope.go:117] "RemoveContainer" containerID="a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9" Mar 13 10:45:29.825283 master-0 kubenswrapper[7271]: E0313 10:45:29.825257 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9\": container with ID starting with a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9 not found: ID does not exist" containerID="a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9" Mar 13 10:45:29.825356 master-0 kubenswrapper[7271]: I0313 10:45:29.825281 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9"} err="failed to get container status \"a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9\": rpc error: code = NotFound desc = could not find container \"a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9\": container with ID starting with a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9 not found: ID does not exist" Mar 13 10:45:29.825356 master-0 kubenswrapper[7271]: I0313 10:45:29.825297 7271 scope.go:117] "RemoveContainer" containerID="ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b" Mar 13 10:45:29.825473 master-0 kubenswrapper[7271]: E0313 
10:45:29.825450 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b\": container with ID starting with ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b not found: ID does not exist" containerID="ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b" Mar 13 10:45:29.825473 master-0 kubenswrapper[7271]: I0313 10:45:29.825470 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b"} err="failed to get container status \"ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b\": rpc error: code = NotFound desc = could not find container \"ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b\": container with ID starting with ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b not found: ID does not exist" Mar 13 10:45:29.825632 master-0 kubenswrapper[7271]: I0313 10:45:29.825482 7271 scope.go:117] "RemoveContainer" containerID="1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466" Mar 13 10:45:29.825699 master-0 kubenswrapper[7271]: E0313 10:45:29.825650 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466\": container with ID starting with 1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466 not found: ID does not exist" containerID="1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466" Mar 13 10:45:29.825699 master-0 kubenswrapper[7271]: I0313 10:45:29.825672 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466"} err="failed to get container status 
\"1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466\": rpc error: code = NotFound desc = could not find container \"1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466\": container with ID starting with 1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466 not found: ID does not exist" Mar 13 10:45:29.825699 master-0 kubenswrapper[7271]: I0313 10:45:29.825689 7271 scope.go:117] "RemoveContainer" containerID="55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448" Mar 13 10:45:29.825869 master-0 kubenswrapper[7271]: E0313 10:45:29.825847 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448\": container with ID starting with 55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448 not found: ID does not exist" containerID="55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448" Mar 13 10:45:29.825943 master-0 kubenswrapper[7271]: I0313 10:45:29.825866 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448"} err="failed to get container status \"55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448\": rpc error: code = NotFound desc = could not find container \"55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448\": container with ID starting with 55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448 not found: ID does not exist" Mar 13 10:45:29.825943 master-0 kubenswrapper[7271]: I0313 10:45:29.825880 7271 scope.go:117] "RemoveContainer" containerID="961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e" Mar 13 10:45:29.826062 master-0 kubenswrapper[7271]: E0313 10:45:29.826030 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e\": container with ID starting with 961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e not found: ID does not exist" containerID="961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e" Mar 13 10:45:29.826062 master-0 kubenswrapper[7271]: I0313 10:45:29.826048 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e"} err="failed to get container status \"961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e\": rpc error: code = NotFound desc = could not find container \"961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e\": container with ID starting with 961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e not found: ID does not exist" Mar 13 10:45:29.826062 master-0 kubenswrapper[7271]: I0313 10:45:29.826062 7271 scope.go:117] "RemoveContainer" containerID="18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b" Mar 13 10:45:29.826228 master-0 kubenswrapper[7271]: E0313 10:45:29.826203 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b\": container with ID starting with 18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b not found: ID does not exist" containerID="18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b" Mar 13 10:45:29.826228 master-0 kubenswrapper[7271]: I0313 10:45:29.826218 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b"} err="failed to get container status \"18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b\": rpc error: code = NotFound desc = could not find container 
\"18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b\": container with ID starting with 18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b not found: ID does not exist" Mar 13 10:45:29.826329 master-0 kubenswrapper[7271]: I0313 10:45:29.826232 7271 scope.go:117] "RemoveContainer" containerID="3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb" Mar 13 10:45:29.826439 master-0 kubenswrapper[7271]: I0313 10:45:29.826415 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb"} err="failed to get container status \"3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb\": rpc error: code = NotFound desc = could not find container \"3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb\": container with ID starting with 3089e6f3b3e439b2957cab91455669dd4dff4bf3d4761c891273d29fbbf59bfb not found: ID does not exist" Mar 13 10:45:29.826439 master-0 kubenswrapper[7271]: I0313 10:45:29.826434 7271 scope.go:117] "RemoveContainer" containerID="cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597" Mar 13 10:45:29.826627 master-0 kubenswrapper[7271]: I0313 10:45:29.826603 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597"} err="failed to get container status \"cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597\": rpc error: code = NotFound desc = could not find container \"cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597\": container with ID starting with cb824d0c26adb350fa0b0bacb2055bcb55ddd73c1101b2629d6317dc7e1e6597 not found: ID does not exist" Mar 13 10:45:29.826627 master-0 kubenswrapper[7271]: I0313 10:45:29.826620 7271 scope.go:117] "RemoveContainer" containerID="a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9" Mar 13 
10:45:29.826867 master-0 kubenswrapper[7271]: I0313 10:45:29.826825 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9"} err="failed to get container status \"a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9\": rpc error: code = NotFound desc = could not find container \"a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9\": container with ID starting with a889b46874f42c94cbba783b525dab50e6891c8b1060d253d55e797728a246b9 not found: ID does not exist" Mar 13 10:45:29.826867 master-0 kubenswrapper[7271]: I0313 10:45:29.826843 7271 scope.go:117] "RemoveContainer" containerID="ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b" Mar 13 10:45:29.827425 master-0 kubenswrapper[7271]: I0313 10:45:29.827402 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b"} err="failed to get container status \"ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b\": rpc error: code = NotFound desc = could not find container \"ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b\": container with ID starting with ddb1955ddf07a0481cc9523f4c5c4740130a031fc19198319eea449bed89b85b not found: ID does not exist" Mar 13 10:45:29.827425 master-0 kubenswrapper[7271]: I0313 10:45:29.827420 7271 scope.go:117] "RemoveContainer" containerID="1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466" Mar 13 10:45:29.827650 master-0 kubenswrapper[7271]: I0313 10:45:29.827620 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466"} err="failed to get container status \"1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466\": rpc error: code = NotFound desc = could not find container 
\"1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466\": container with ID starting with 1c386bfda83ec7decd7bf6c1450f7b33dc56d49c15e8dc8140908238c4f08466 not found: ID does not exist" Mar 13 10:45:29.827650 master-0 kubenswrapper[7271]: I0313 10:45:29.827643 7271 scope.go:117] "RemoveContainer" containerID="55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448" Mar 13 10:45:29.828107 master-0 kubenswrapper[7271]: I0313 10:45:29.828083 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448"} err="failed to get container status \"55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448\": rpc error: code = NotFound desc = could not find container \"55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448\": container with ID starting with 55c554c804b16e45a4e757c4edf4d5f0560727559d8c9bd2e924afcd9646b448 not found: ID does not exist" Mar 13 10:45:29.828107 master-0 kubenswrapper[7271]: I0313 10:45:29.828103 7271 scope.go:117] "RemoveContainer" containerID="961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e" Mar 13 10:45:29.828570 master-0 kubenswrapper[7271]: I0313 10:45:29.828505 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e"} err="failed to get container status \"961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e\": rpc error: code = NotFound desc = could not find container \"961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e\": container with ID starting with 961934a8e5627265375959d61e6040d0bb2309abb43ea8b1c0e9d2875295414e not found: ID does not exist" Mar 13 10:45:29.828758 master-0 kubenswrapper[7271]: I0313 10:45:29.828574 7271 scope.go:117] "RemoveContainer" containerID="18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b" Mar 13 
10:45:29.829140 master-0 kubenswrapper[7271]: I0313 10:45:29.829104 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b"} err="failed to get container status \"18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b\": rpc error: code = NotFound desc = could not find container \"18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b\": container with ID starting with 18830bbc0afe717af2bed3c1064cb3c0ed999e957523c2a0f98dd6627ffe4c7b not found: ID does not exist" Mar 13 10:45:29.878507 master-0 kubenswrapper[7271]: I0313 10:45:29.878385 7271 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:45:30.646218 master-0 kubenswrapper[7271]: I0313 10:45:30.645805 7271 scope.go:117] "RemoveContainer" containerID="6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479" Mar 13 10:45:30.646218 master-0 kubenswrapper[7271]: E0313 10:45:30.646037 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:45:31.656839 master-0 kubenswrapper[7271]: I0313 10:45:31.656782 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes" Mar 13 10:45:33.068156 master-0 kubenswrapper[7271]: E0313 10:45:33.068015 7271 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - 
context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189c60b88dcbec4d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:44:59.046333517 +0000 UTC m=+553.573156087,LastTimestamp:2026-03-13 10:44:59.046333517 +0000 UTC m=+553.573156087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 10:45:34.797900 master-0 kubenswrapper[7271]: E0313 10:45:34.797786 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:45:35.866446 master-0 kubenswrapper[7271]: E0313 10:45:35.866320 7271 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6\": RecentStats: unable to find data in memory cache]"
Mar 13 10:45:37.285141 master-0 kubenswrapper[7271]: E0313 10:45:37.284940 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:45:37.727761 master-0 kubenswrapper[7271]: I0313 10:45:37.727704 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/2.log"
Mar 13 10:45:37.728358 master-0 kubenswrapper[7271]: I0313 10:45:37.728319 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/1.log"
Mar 13 10:45:37.729007 master-0 kubenswrapper[7271]: I0313 10:45:37.728974 7271 generic.go:334] "Generic (PLEG): container finished" podID="7667717b-fb74-456b-8615-16475cb69e98" containerID="aedbabca0ae1386209b376e594af5a1aca17689f565bb27119f58a8f09e1fc7c" exitCode=1
Mar 13 10:45:37.729079 master-0 kubenswrapper[7271]: I0313 10:45:37.729021 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerDied","Data":"aedbabca0ae1386209b376e594af5a1aca17689f565bb27119f58a8f09e1fc7c"}
Mar 13 10:45:37.729079 master-0 kubenswrapper[7271]: I0313 10:45:37.729070 7271 scope.go:117] "RemoveContainer" containerID="246797499d890bbe0f0da9bedf22922185a5c85e0c93f20f83953bdd9898d644"
Mar 13 10:45:37.729958 master-0 kubenswrapper[7271]: I0313 10:45:37.729924 7271 scope.go:117] "RemoveContainer" containerID="aedbabca0ae1386209b376e594af5a1aca17689f565bb27119f58a8f09e1fc7c"
Mar 13 10:45:37.731691 master-0 kubenswrapper[7271]: E0313 10:45:37.731654 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98"
Mar 13 10:45:38.738321 master-0 kubenswrapper[7271]: I0313 10:45:38.738273 7271 generic.go:334] "Generic (PLEG): container finished" podID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerID="9aabceaa9098fa374fa3be7884e41fb57131871ca89880498f237e8d19971731" exitCode=0
Mar 13 10:45:38.738321 master-0 kubenswrapper[7271]: I0313 10:45:38.738339 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerDied","Data":"9aabceaa9098fa374fa3be7884e41fb57131871ca89880498f237e8d19971731"}
Mar 13 10:45:38.739101 master-0 kubenswrapper[7271]: I0313 10:45:38.738365 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerStarted","Data":"355bba8a4cefe5a34bf9903f07fd7230c56e2657d48a952a7979a55c45edb0b5"}
Mar 13 10:45:38.739101 master-0 kubenswrapper[7271]: I0313 10:45:38.738381 7271 scope.go:117] "RemoveContainer" containerID="c38e1852651e9aa29b7c4aa782bd48bf04b7ff3ecd204555f9421edc8fb3fef6"
Mar 13 10:45:38.741388 master-0 kubenswrapper[7271]: I0313 10:45:38.741334 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/2.log"
Mar 13 10:45:38.880904 master-0 kubenswrapper[7271]: I0313 10:45:38.880831 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:45:38.884714 master-0 kubenswrapper[7271]: I0313 10:45:38.884638 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:38.884714 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:38.884714 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:38.884714 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:38.884885 master-0 kubenswrapper[7271]: I0313 10:45:38.884755 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:39.883347 master-0 kubenswrapper[7271]: I0313 10:45:39.883282 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:39.883347 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:39.883347 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:39.883347 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:39.884013 master-0 kubenswrapper[7271]: I0313 10:45:39.883353 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:40.645067 master-0 kubenswrapper[7271]: I0313 10:45:40.644971 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 13 10:45:40.660459 master-0 kubenswrapper[7271]: I0313 10:45:40.660400 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9"
Mar 13 10:45:40.660459 master-0 kubenswrapper[7271]: I0313 10:45:40.660442 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9"
Mar 13 10:45:40.883962 master-0 kubenswrapper[7271]: I0313 10:45:40.883847 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:40.883962 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:40.883962 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:40.883962 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:40.884715 master-0 kubenswrapper[7271]: I0313 10:45:40.883978 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:41.883573 master-0 kubenswrapper[7271]: I0313 10:45:41.883384 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:41.883573 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:41.883573 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:41.883573 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:41.883573 master-0 kubenswrapper[7271]: I0313 10:45:41.883517 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:42.880439 master-0 kubenswrapper[7271]: I0313 10:45:42.880343 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:45:42.882567 master-0 kubenswrapper[7271]: I0313 10:45:42.882505 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:42.882567 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:42.882567 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:42.882567 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:42.882567 master-0 kubenswrapper[7271]: I0313 10:45:42.882556 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:43.882968 master-0 kubenswrapper[7271]: I0313 10:45:43.882910 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:43.882968 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:43.882968 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:43.882968 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:43.883606 master-0 kubenswrapper[7271]: I0313 10:45:43.882970 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:44.259887 master-0 kubenswrapper[7271]: E0313 10:45:44.259692 7271 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164610fc_3942_4e85_9f80_a335c9efcc2f.slice/crio-a303a7aee439985c205f48471f9e246f6eeaa905a4431f515631510faa7d2fc6\": RecentStats: unable to find data in memory cache]"
Mar 13 10:45:44.798800 master-0 kubenswrapper[7271]: E0313 10:45:44.798717 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:45:44.883681 master-0 kubenswrapper[7271]: I0313 10:45:44.883558 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:44.883681 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:44.883681 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:44.883681 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:44.883681 master-0 kubenswrapper[7271]: I0313 10:45:44.883675 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:45.760201 master-0 kubenswrapper[7271]: I0313 10:45:45.759989 7271 scope.go:117] "RemoveContainer" containerID="6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479"
Mar 13 10:45:45.882654 master-0 kubenswrapper[7271]: I0313 10:45:45.882563 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:45.882654 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:45.882654 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:45.882654 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:45.882977 master-0 kubenswrapper[7271]: I0313 10:45:45.882673 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:46.818395 master-0 kubenswrapper[7271]: I0313 10:45:46.818338 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"fe626963509dc538a60b23b46864b9cb9bb7ee0855c69d8f8a5c6f238417fdd4"}
Mar 13 10:45:46.883468 master-0 kubenswrapper[7271]: I0313 10:45:46.883391 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:46.883468 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:46.883468 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:46.883468 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:46.883832 master-0 kubenswrapper[7271]: I0313 10:45:46.883482 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:47.286154 master-0 kubenswrapper[7271]: E0313 10:45:47.285937 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:45:47.468746 master-0 kubenswrapper[7271]: I0313 10:45:47.468670 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:45:47.884284 master-0 kubenswrapper[7271]: I0313 10:45:47.884161 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:47.884284 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:47.884284 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:47.884284 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:47.884843 master-0 kubenswrapper[7271]: I0313 10:45:47.884359 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:48.883129 master-0 kubenswrapper[7271]: I0313 10:45:48.883065 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:48.883129 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:48.883129 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:48.883129 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:48.883129 master-0 kubenswrapper[7271]: I0313 10:45:48.883121 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:49.646083 master-0 kubenswrapper[7271]: I0313 10:45:49.645972 7271 scope.go:117] "RemoveContainer" containerID="aedbabca0ae1386209b376e594af5a1aca17689f565bb27119f58a8f09e1fc7c"
Mar 13 10:45:49.647071 master-0 kubenswrapper[7271]: E0313 10:45:49.646320 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98"
Mar 13 10:45:49.882670 master-0 kubenswrapper[7271]: I0313 10:45:49.882623 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:49.882670 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:49.882670 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:49.882670 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:49.882982 master-0 kubenswrapper[7271]: I0313 10:45:49.882679 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:50.468696 master-0 kubenswrapper[7271]: I0313 10:45:50.468577 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:45:50.883156 master-0 kubenswrapper[7271]: I0313 10:45:50.883090 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:50.883156 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:50.883156 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:50.883156 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:50.884184 master-0 kubenswrapper[7271]: I0313 10:45:50.883164 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:51.884517 master-0 kubenswrapper[7271]: I0313 10:45:51.884438 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:51.884517 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:51.884517 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:51.884517 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:51.884517 master-0 kubenswrapper[7271]: I0313 10:45:51.884514 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:52.883418 master-0 kubenswrapper[7271]: I0313 10:45:52.883375 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:52.883418 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:52.883418 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:52.883418 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:52.883891 master-0 kubenswrapper[7271]: I0313 10:45:52.883862 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:53.796074 master-0 kubenswrapper[7271]: I0313 10:45:53.796008 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:45:53.883187 master-0 kubenswrapper[7271]: I0313 10:45:53.883115 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:53.883187 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:53.883187 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:53.883187 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:53.883501 master-0 kubenswrapper[7271]: I0313 10:45:53.883196 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:54.799474 master-0 kubenswrapper[7271]: E0313 10:45:54.799403 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:45:54.799474 master-0 kubenswrapper[7271]: E0313 10:45:54.799458 7271 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 10:45:54.882519 master-0 kubenswrapper[7271]: I0313 10:45:54.882459 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:54.882519 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:54.882519 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:54.882519 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:54.882519 master-0 kubenswrapper[7271]: I0313 10:45:54.882516 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:55.883699 master-0 kubenswrapper[7271]: I0313 10:45:55.883564 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:55.883699 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:55.883699 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:55.883699 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:55.884525 master-0 kubenswrapper[7271]: I0313 10:45:55.883717 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:55.889011 master-0 kubenswrapper[7271]: I0313 10:45:55.888977 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-9z8mk_f87662b9-6ac6-44f3-8a16-ff858c2baa91/approver/1.log"
Mar 13 10:45:55.889528 master-0 kubenswrapper[7271]: I0313 10:45:55.889496 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-9z8mk_f87662b9-6ac6-44f3-8a16-ff858c2baa91/approver/0.log"
Mar 13 10:45:55.889853 master-0 kubenswrapper[7271]: I0313 10:45:55.889817 7271 generic.go:334] "Generic (PLEG): container finished" podID="f87662b9-6ac6-44f3-8a16-ff858c2baa91" containerID="7e21ba1a4a052f4311590e81daf7c7043a43eea8119ade6c511b95ed35202221" exitCode=1
Mar 13 10:45:55.889914 master-0 kubenswrapper[7271]: I0313 10:45:55.889857 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-9z8mk" event={"ID":"f87662b9-6ac6-44f3-8a16-ff858c2baa91","Type":"ContainerDied","Data":"7e21ba1a4a052f4311590e81daf7c7043a43eea8119ade6c511b95ed35202221"}
Mar 13 10:45:55.889914 master-0 kubenswrapper[7271]: I0313 10:45:55.889901 7271 scope.go:117] "RemoveContainer" containerID="d2e7a9c17281b6d5f7f20fbe7b128af98dc009aec3115a4cb2ebd1a39090d634"
Mar 13 10:45:55.890642 master-0 kubenswrapper[7271]: I0313 10:45:55.890578 7271 scope.go:117] "RemoveContainer" containerID="7e21ba1a4a052f4311590e81daf7c7043a43eea8119ade6c511b95ed35202221"
Mar 13 10:45:55.890954 master-0 kubenswrapper[7271]: E0313 10:45:55.890908 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-9z8mk_openshift-network-node-identity(f87662b9-6ac6-44f3-8a16-ff858c2baa91)\"" pod="openshift-network-node-identity/network-node-identity-9z8mk" podUID="f87662b9-6ac6-44f3-8a16-ff858c2baa91"
Mar 13 10:45:56.883401 master-0 kubenswrapper[7271]: I0313 10:45:56.883343 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:56.883401 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:56.883401 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:56.883401 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:56.884541 master-0 kubenswrapper[7271]: I0313 10:45:56.883442 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:56.902889 master-0 kubenswrapper[7271]: I0313 10:45:56.902795 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-9z8mk_f87662b9-6ac6-44f3-8a16-ff858c2baa91/approver/1.log"
Mar 13 10:45:57.287119 master-0 kubenswrapper[7271]: E0313 10:45:57.286969 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:45:57.287372 master-0 kubenswrapper[7271]: I0313 10:45:57.287351 7271 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 13 10:45:57.883317 master-0 kubenswrapper[7271]: I0313 10:45:57.883244 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:57.883317 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:57.883317 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:57.883317 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:57.883776 master-0 kubenswrapper[7271]: I0313 10:45:57.883340 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:58.883973 master-0 kubenswrapper[7271]: I0313 10:45:58.883904 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:58.883973 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:58.883973 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:58.883973 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:58.885173 master-0 kubenswrapper[7271]: I0313 10:45:58.884811 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:45:59.882992 master-0 kubenswrapper[7271]: I0313 10:45:59.882932 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:45:59.882992 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:45:59.882992 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:45:59.882992 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:45:59.883284 master-0 kubenswrapper[7271]: I0313 10:45:59.883008 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:00.469324 master-0 kubenswrapper[7271]: I0313 10:46:00.469234 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:46:00.883003 master-0 kubenswrapper[7271]: I0313 10:46:00.882928 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:00.883003 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:00.883003 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:00.883003 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:00.883313 master-0 kubenswrapper[7271]: I0313 10:46:00.883048 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:01.883189 master-0 kubenswrapper[7271]: I0313 10:46:01.883107 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:01.883189 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:01.883189 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:01.883189 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:01.883819 master-0 kubenswrapper[7271]: I0313 10:46:01.883189 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:02.883425 master-0 kubenswrapper[7271]: I0313 10:46:02.883352 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:02.883425 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:02.883425 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:02.883425 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:02.884215 master-0 kubenswrapper[7271]: I0313 10:46:02.883440 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:03.646531 master-0 kubenswrapper[7271]: I0313 10:46:03.646467 7271 scope.go:117] "RemoveContainer" containerID="aedbabca0ae1386209b376e594af5a1aca17689f565bb27119f58a8f09e1fc7c"
Mar 13 10:46:03.883504 master-0 kubenswrapper[7271]: I0313 10:46:03.883439 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:03.883504 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:03.883504 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:03.883504 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:03.884437 master-0 kubenswrapper[7271]: I0313 10:46:03.884400 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:03.956548 master-0 kubenswrapper[7271]: I0313 10:46:03.956428 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/2.log"
Mar 13 10:46:03.957346 master-0 kubenswrapper[7271]: I0313 10:46:03.957319 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerStarted","Data":"ea6075437ddab13db72254693b1402fadb6322d2c4a635387e569d11ef32e573"}
Mar 13 10:46:04.883138 master-0 kubenswrapper[7271]: I0313 10:46:04.883065 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:04.883138 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:04.883138 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:04.883138 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:04.883138 master-0 kubenswrapper[7271]: I0313 10:46:04.883147 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:05.104429 master-0 kubenswrapper[7271]: I0313 10:46:05.104353 7271 status_manager.go:851] "Failed to get status for pod" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" pod="openshift-multus/multus-admission-controller-8d675b596-d787l" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods multus-admission-controller-8d675b596-d787l)"
Mar 13 10:46:05.883081 master-0 kubenswrapper[7271]: I0313 10:46:05.883003 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:05.883081 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:05.883081 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:05.883081 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:05.883448 master-0 kubenswrapper[7271]: I0313 10:46:05.883085 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:06.883225 master-0 kubenswrapper[7271]: I0313 10:46:06.883021 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:06.883225 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:06.883225 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:06.883225 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:06.884472 master-0 kubenswrapper[7271]: I0313 10:46:06.883240 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:07.071324 master-0 kubenswrapper[7271]: E0313 10:46:07.071051 7271 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c60bbb1b96028 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:45:12.533999656 +0000 UTC m=+567.060822086,LastTimestamp:2026-03-13 10:45:12.533999656 +0000 UTC
m=+567.060822086,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:46:07.288492 master-0 kubenswrapper[7271]: E0313 10:46:07.288216 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 13 10:46:07.884192 master-0 kubenswrapper[7271]: I0313 10:46:07.884105 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:07.884192 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:07.884192 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:07.884192 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:07.884985 master-0 kubenswrapper[7271]: I0313 10:46:07.884191 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:08.883336 master-0 kubenswrapper[7271]: I0313 10:46:08.883266 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:08.883336 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:08.883336 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:08.883336 master-0 kubenswrapper[7271]: healthz check failed Mar 
13 10:46:08.883336 master-0 kubenswrapper[7271]: I0313 10:46:08.883335 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:09.883179 master-0 kubenswrapper[7271]: I0313 10:46:09.883038 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:09.883179 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:09.883179 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:09.883179 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:09.883179 master-0 kubenswrapper[7271]: I0313 10:46:09.883134 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:10.006971 master-0 kubenswrapper[7271]: I0313 10:46:10.006910 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c/installer/0.log" Mar 13 10:46:10.007201 master-0 kubenswrapper[7271]: I0313 10:46:10.006977 7271 generic.go:334] "Generic (PLEG): container finished" podID="e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" containerID="3c84db0498138b2ad19628a630c45e3de3b287d4abdd1560f1b74b129ad3abaf" exitCode=1 Mar 13 10:46:10.007201 master-0 kubenswrapper[7271]: I0313 10:46:10.007019 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" 
event={"ID":"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c","Type":"ContainerDied","Data":"3c84db0498138b2ad19628a630c45e3de3b287d4abdd1560f1b74b129ad3abaf"} Mar 13 10:46:10.469892 master-0 kubenswrapper[7271]: I0313 10:46:10.469201 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:46:10.469892 master-0 kubenswrapper[7271]: I0313 10:46:10.469406 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:46:10.470603 master-0 kubenswrapper[7271]: I0313 10:46:10.470525 7271 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"fe626963509dc538a60b23b46864b9cb9bb7ee0855c69d8f8a5c6f238417fdd4"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 10:46:10.470714 master-0 kubenswrapper[7271]: I0313 10:46:10.470677 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://fe626963509dc538a60b23b46864b9cb9bb7ee0855c69d8f8a5c6f238417fdd4" gracePeriod=30 Mar 13 10:46:10.882836 master-0 kubenswrapper[7271]: I0313 10:46:10.882760 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:10.882836 
master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:10.882836 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:10.882836 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:10.884042 master-0 kubenswrapper[7271]: I0313 10:46:10.883990 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:11.018125 master-0 kubenswrapper[7271]: I0313 10:46:11.018002 7271 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="fe626963509dc538a60b23b46864b9cb9bb7ee0855c69d8f8a5c6f238417fdd4" exitCode=2 Mar 13 10:46:11.018311 master-0 kubenswrapper[7271]: I0313 10:46:11.018235 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"fe626963509dc538a60b23b46864b9cb9bb7ee0855c69d8f8a5c6f238417fdd4"} Mar 13 10:46:11.018311 master-0 kubenswrapper[7271]: I0313 10:46:11.018270 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b"} Mar 13 10:46:11.018311 master-0 kubenswrapper[7271]: I0313 10:46:11.018291 7271 scope.go:117] "RemoveContainer" containerID="6b8cee904e554093314cde9eee6c22eacab9d7c222d3e342258752f9f92f0479" Mar 13 10:46:11.333944 master-0 kubenswrapper[7271]: I0313 10:46:11.333908 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c/installer/0.log" Mar 13 10:46:11.334138 master-0 kubenswrapper[7271]: I0313 10:46:11.333979 7271 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:46:11.462612 master-0 kubenswrapper[7271]: I0313 10:46:11.462512 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kube-api-access\") pod \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " Mar 13 10:46:11.462868 master-0 kubenswrapper[7271]: I0313 10:46:11.462670 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kubelet-dir\") pod \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " Mar 13 10:46:11.462868 master-0 kubenswrapper[7271]: I0313 10:46:11.462798 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-var-lock\") pod \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\" (UID: \"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c\") " Mar 13 10:46:11.462868 master-0 kubenswrapper[7271]: I0313 10:46:11.462832 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" (UID: "e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:46:11.462976 master-0 kubenswrapper[7271]: I0313 10:46:11.462962 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-var-lock" (OuterVolumeSpecName: "var-lock") pod "e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" (UID: "e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:46:11.463160 master-0 kubenswrapper[7271]: I0313 10:46:11.463126 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:46:11.463160 master-0 kubenswrapper[7271]: I0313 10:46:11.463154 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:46:11.465765 master-0 kubenswrapper[7271]: I0313 10:46:11.465695 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" (UID: "e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:46:11.564407 master-0 kubenswrapper[7271]: I0313 10:46:11.564281 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:46:11.646559 master-0 kubenswrapper[7271]: I0313 10:46:11.646499 7271 scope.go:117] "RemoveContainer" containerID="7e21ba1a4a052f4311590e81daf7c7043a43eea8119ade6c511b95ed35202221" Mar 13 10:46:11.883373 master-0 kubenswrapper[7271]: I0313 10:46:11.883165 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:11.883373 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:11.883373 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:11.883373 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:11.883373 master-0 kubenswrapper[7271]: I0313 10:46:11.883288 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:12.028756 master-0 kubenswrapper[7271]: I0313 10:46:12.028704 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c/installer/0.log" Mar 13 10:46:12.028989 master-0 kubenswrapper[7271]: I0313 10:46:12.028847 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 10:46:12.029620 master-0 kubenswrapper[7271]: I0313 10:46:12.029562 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c","Type":"ContainerDied","Data":"5b6e436b9cc09a918e66ed32313004ed8edc16d26f739da2414fd5c6347334d8"} Mar 13 10:46:12.029696 master-0 kubenswrapper[7271]: I0313 10:46:12.029626 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b6e436b9cc09a918e66ed32313004ed8edc16d26f739da2414fd5c6347334d8" Mar 13 10:46:12.031611 master-0 kubenswrapper[7271]: I0313 10:46:12.031563 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-9z8mk_f87662b9-6ac6-44f3-8a16-ff858c2baa91/approver/1.log" Mar 13 10:46:12.032146 master-0 kubenswrapper[7271]: I0313 10:46:12.032113 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-9z8mk" event={"ID":"f87662b9-6ac6-44f3-8a16-ff858c2baa91","Type":"ContainerStarted","Data":"d3ddcbb676d46e64d7ba3e7be74d0de883a020635f5cc54dd2e158997e4bc376"} Mar 13 10:46:12.883283 master-0 kubenswrapper[7271]: I0313 10:46:12.883192 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:12.883283 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:12.883283 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:12.883283 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:12.883932 master-0 kubenswrapper[7271]: I0313 10:46:12.883300 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:13.796446 master-0 kubenswrapper[7271]: I0313 10:46:13.796402 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:46:13.882789 master-0 kubenswrapper[7271]: I0313 10:46:13.882705 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:13.882789 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:13.882789 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:13.882789 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:13.882789 master-0 kubenswrapper[7271]: I0313 10:46:13.882782 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:14.047763 master-0 kubenswrapper[7271]: I0313 10:46:14.047616 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_a55a2a95-178c-4fcd-9866-3a149948d1d3/installer/0.log" Mar 13 10:46:14.047763 master-0 kubenswrapper[7271]: I0313 10:46:14.047669 7271 generic.go:334] "Generic (PLEG): container finished" podID="a55a2a95-178c-4fcd-9866-3a149948d1d3" containerID="1095e539909ae9e46360f463a967bbc617daeb2d47612ebdc2519683e6fd658c" exitCode=1 Mar 13 10:46:14.047763 master-0 kubenswrapper[7271]: I0313 10:46:14.047698 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" 
event={"ID":"a55a2a95-178c-4fcd-9866-3a149948d1d3","Type":"ContainerDied","Data":"1095e539909ae9e46360f463a967bbc617daeb2d47612ebdc2519683e6fd658c"} Mar 13 10:46:14.662939 master-0 kubenswrapper[7271]: E0313 10:46:14.662861 7271 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 10:46:14.663769 master-0 kubenswrapper[7271]: I0313 10:46:14.663734 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 10:46:14.883896 master-0 kubenswrapper[7271]: I0313 10:46:14.883845 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:14.883896 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:14.883896 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:14.883896 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:14.884188 master-0 kubenswrapper[7271]: I0313 10:46:14.883909 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:14.941317 master-0 kubenswrapper[7271]: E0313 10:46:14.941088 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:46:04Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:46:04Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:46:04Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:46:04Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4855408bd0e4d0711383d0c14dcad53c98255ff9f83f6cbefb57e47eacc1f1f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:97bdbb5854e4ad7976209a44cff02c8a2b9542f58ad007c06a5c3a5e8266def1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8c978bb5c329452b181f61f00452b4c2bfd83d245db56050bc7607972a791a76\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:e6567accc084db971e077b5ca666357e3a326fa27f69fc7135a5bc2e19f998eb\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221745369},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc1
4d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\
"],\\\"sizeBytes\\\":471430788}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:46:15.055565 master-0 kubenswrapper[7271]: I0313 10:46:15.055495 7271 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="2c12c83d53e7d737ce9ecb47ead0648457311377deed39823b1a6e3ee6b6647d" exitCode=0 Mar 13 10:46:15.056466 master-0 kubenswrapper[7271]: I0313 10:46:15.055734 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"2c12c83d53e7d737ce9ecb47ead0648457311377deed39823b1a6e3ee6b6647d"} Mar 13 10:46:15.056466 master-0 kubenswrapper[7271]: I0313 10:46:15.055768 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"c7d9738c3adc0c979eef42141f9dc2b629b15190348d5c5364a237fdd93a9dff"} Mar 13 10:46:15.056466 master-0 kubenswrapper[7271]: I0313 10:46:15.056008 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:46:15.056466 master-0 kubenswrapper[7271]: I0313 10:46:15.056023 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:46:15.354793 master-0 kubenswrapper[7271]: I0313 10:46:15.354725 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_a55a2a95-178c-4fcd-9866-3a149948d1d3/installer/0.log" Mar 13 10:46:15.355118 master-0 kubenswrapper[7271]: I0313 10:46:15.354817 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:46:15.425915 master-0 kubenswrapper[7271]: I0313 10:46:15.425808 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-kubelet-dir\") pod \"a55a2a95-178c-4fcd-9866-3a149948d1d3\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " Mar 13 10:46:15.425915 master-0 kubenswrapper[7271]: I0313 10:46:15.425887 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a2a95-178c-4fcd-9866-3a149948d1d3-kube-api-access\") pod \"a55a2a95-178c-4fcd-9866-3a149948d1d3\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " Mar 13 10:46:15.425915 master-0 kubenswrapper[7271]: I0313 10:46:15.425923 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-var-lock\") pod \"a55a2a95-178c-4fcd-9866-3a149948d1d3\" (UID: \"a55a2a95-178c-4fcd-9866-3a149948d1d3\") " Mar 13 10:46:15.426431 master-0 kubenswrapper[7271]: I0313 10:46:15.425958 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a55a2a95-178c-4fcd-9866-3a149948d1d3" (UID: "a55a2a95-178c-4fcd-9866-3a149948d1d3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:46:15.426431 master-0 kubenswrapper[7271]: I0313 10:46:15.426053 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-var-lock" (OuterVolumeSpecName: "var-lock") pod "a55a2a95-178c-4fcd-9866-3a149948d1d3" (UID: "a55a2a95-178c-4fcd-9866-3a149948d1d3"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:46:15.426561 master-0 kubenswrapper[7271]: I0313 10:46:15.426491 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:46:15.426561 master-0 kubenswrapper[7271]: I0313 10:46:15.426524 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a55a2a95-178c-4fcd-9866-3a149948d1d3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:46:15.429209 master-0 kubenswrapper[7271]: I0313 10:46:15.429161 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55a2a95-178c-4fcd-9866-3a149948d1d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a55a2a95-178c-4fcd-9866-3a149948d1d3" (UID: "a55a2a95-178c-4fcd-9866-3a149948d1d3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:46:15.527580 master-0 kubenswrapper[7271]: I0313 10:46:15.527486 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a55a2a95-178c-4fcd-9866-3a149948d1d3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:46:15.883416 master-0 kubenswrapper[7271]: I0313 10:46:15.883167 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:15.883416 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:15.883416 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:15.883416 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:15.883416 master-0 kubenswrapper[7271]: I0313 10:46:15.883312 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:16.062334 master-0 kubenswrapper[7271]: I0313 10:46:16.062280 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_a55a2a95-178c-4fcd-9866-3a149948d1d3/installer/0.log" Mar 13 10:46:16.063352 master-0 kubenswrapper[7271]: I0313 10:46:16.062348 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"a55a2a95-178c-4fcd-9866-3a149948d1d3","Type":"ContainerDied","Data":"07760884fb73f623d10ed12cbe3f37005e2db59b258a61a52af5d3fc8c6b9063"} Mar 13 10:46:16.063352 master-0 kubenswrapper[7271]: I0313 10:46:16.062383 7271 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="07760884fb73f623d10ed12cbe3f37005e2db59b258a61a52af5d3fc8c6b9063" Mar 13 10:46:16.063352 master-0 kubenswrapper[7271]: I0313 10:46:16.062394 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 10:46:16.883413 master-0 kubenswrapper[7271]: I0313 10:46:16.883299 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:16.883413 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:16.883413 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:16.883413 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:16.884026 master-0 kubenswrapper[7271]: I0313 10:46:16.883419 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:17.468453 master-0 kubenswrapper[7271]: I0313 10:46:17.468382 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:46:17.490699 master-0 kubenswrapper[7271]: E0313 10:46:17.490537 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 13 10:46:17.882879 master-0 kubenswrapper[7271]: I0313 10:46:17.882808 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:17.882879 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:17.882879 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:17.882879 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:17.883264 master-0 kubenswrapper[7271]: I0313 10:46:17.882904 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:18.883144 master-0 kubenswrapper[7271]: I0313 10:46:18.883067 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:18.883144 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:18.883144 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:18.883144 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:18.883144 master-0 kubenswrapper[7271]: I0313 10:46:18.883133 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:19.882899 master-0 kubenswrapper[7271]: I0313 10:46:19.882832 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:19.882899 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:19.882899 master-0 
kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:19.882899 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:19.883705 master-0 kubenswrapper[7271]: I0313 10:46:19.882905 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:20.468888 master-0 kubenswrapper[7271]: I0313 10:46:20.468808 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:46:20.882687 master-0 kubenswrapper[7271]: I0313 10:46:20.882636 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:20.882687 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:20.882687 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:20.882687 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:20.882974 master-0 kubenswrapper[7271]: I0313 10:46:20.882702 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:21.882974 master-0 kubenswrapper[7271]: I0313 10:46:21.882893 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:21.882974 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:21.882974 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:21.882974 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:21.883596 master-0 kubenswrapper[7271]: I0313 10:46:21.882989 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:22.883755 master-0 kubenswrapper[7271]: I0313 10:46:22.883692 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:22.883755 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:22.883755 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:22.883755 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:22.884991 master-0 kubenswrapper[7271]: I0313 10:46:22.884893 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:23.883814 master-0 kubenswrapper[7271]: I0313 10:46:23.883749 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:23.883814 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:23.883814 master-0 
kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:23.883814 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:23.883814 master-0 kubenswrapper[7271]: I0313 10:46:23.883825 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:24.883136 master-0 kubenswrapper[7271]: I0313 10:46:24.883094 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:24.883136 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:24.883136 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:24.883136 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:24.883475 master-0 kubenswrapper[7271]: I0313 10:46:24.883450 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:24.941978 master-0 kubenswrapper[7271]: E0313 10:46:24.941917 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:46:25.882887 master-0 kubenswrapper[7271]: I0313 10:46:25.882820 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 13 10:46:25.882887 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:25.882887 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:25.882887 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:25.883280 master-0 kubenswrapper[7271]: I0313 10:46:25.882899 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:26.883255 master-0 kubenswrapper[7271]: I0313 10:46:26.883173 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:26.883255 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:26.883255 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:26.883255 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:26.883943 master-0 kubenswrapper[7271]: I0313 10:46:26.883261 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:27.884020 master-0 kubenswrapper[7271]: I0313 10:46:27.883912 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:27.884020 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:27.884020 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:27.884020 master-0 
kubenswrapper[7271]: healthz check failed Mar 13 10:46:27.884020 master-0 kubenswrapper[7271]: I0313 10:46:27.884010 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:27.892250 master-0 kubenswrapper[7271]: E0313 10:46:27.892166 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 13 10:46:28.883488 master-0 kubenswrapper[7271]: I0313 10:46:28.883369 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:28.883488 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:28.883488 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:28.883488 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:28.883961 master-0 kubenswrapper[7271]: I0313 10:46:28.883484 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:29.883472 master-0 kubenswrapper[7271]: I0313 10:46:29.883382 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:29.883472 master-0 
kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:29.883472 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:29.883472 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:29.884116 master-0 kubenswrapper[7271]: I0313 10:46:29.883493 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:30.469240 master-0 kubenswrapper[7271]: I0313 10:46:30.469040 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:46:30.882701 master-0 kubenswrapper[7271]: I0313 10:46:30.882619 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:30.882701 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:30.882701 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:30.882701 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:30.882701 master-0 kubenswrapper[7271]: I0313 10:46:30.882695 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:31.883698 master-0 kubenswrapper[7271]: I0313 10:46:31.883617 7271 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:31.883698 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:31.883698 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:31.883698 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:31.884369 master-0 kubenswrapper[7271]: I0313 10:46:31.883719 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:32.885813 master-0 kubenswrapper[7271]: I0313 10:46:32.885689 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:32.885813 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:32.885813 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:32.885813 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:32.885813 master-0 kubenswrapper[7271]: I0313 10:46:32.885809 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:33.884357 master-0 kubenswrapper[7271]: I0313 10:46:33.884271 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
10:46:33.884357 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:33.884357 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:33.884357 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:33.885023 master-0 kubenswrapper[7271]: I0313 10:46:33.884368 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:34.883856 master-0 kubenswrapper[7271]: I0313 10:46:34.883755 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:34.883856 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:34.883856 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:34.883856 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:34.885287 master-0 kubenswrapper[7271]: I0313 10:46:34.883891 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:34.942780 master-0 kubenswrapper[7271]: E0313 10:46:34.942690 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:46:35.883164 master-0 kubenswrapper[7271]: I0313 10:46:35.883100 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:35.883164 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:35.883164 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:35.883164 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:35.883164 master-0 kubenswrapper[7271]: I0313 10:46:35.883170 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:36.884161 master-0 kubenswrapper[7271]: I0313 10:46:36.884068 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:36.884161 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:46:36.884161 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:46:36.884161 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:46:36.885187 master-0 kubenswrapper[7271]: I0313 10:46:36.884188 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:46:37.884160 master-0 kubenswrapper[7271]: I0313 10:46:37.884008 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:46:37.884160 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 
Mar 13 10:46:37.884160 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:37.884160 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:37.884160 master-0 kubenswrapper[7271]: I0313 10:46:37.884132 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:38.693876 master-0 kubenswrapper[7271]: E0313 10:46:38.693679 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Mar 13 10:46:38.883199 master-0 kubenswrapper[7271]: I0313 10:46:38.883084 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:38.883199 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:38.883199 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:38.883199 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:38.883609 master-0 kubenswrapper[7271]: I0313 10:46:38.883231 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:39.884100 master-0 kubenswrapper[7271]: I0313 10:46:39.883990 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:39.884100 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:39.884100 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:39.884100 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:39.885443 master-0 kubenswrapper[7271]: I0313 10:46:39.884111 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:40.469575 master-0 kubenswrapper[7271]: I0313 10:46:40.469428 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:46:40.469575 master-0 kubenswrapper[7271]: I0313 10:46:40.469631 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:46:40.470738 master-0 kubenswrapper[7271]: I0313 10:46:40.470690 7271 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 13 10:46:40.470815 master-0 kubenswrapper[7271]: I0313 10:46:40.470771 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b" gracePeriod=30
Mar 13 10:46:40.605659 master-0 kubenswrapper[7271]: E0313 10:46:40.605570 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:46:40.883556 master-0 kubenswrapper[7271]: I0313 10:46:40.883493 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:40.883556 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:40.883556 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:40.883556 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:40.883923 master-0 kubenswrapper[7271]: I0313 10:46:40.883581 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:41.075631 master-0 kubenswrapper[7271]: E0313 10:46:41.075411 7271 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c60bbb1b96028 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:45:12.533999656 +0000 UTC m=+567.060822086,LastTimestamp:2026-03-13 10:45:13.797691969 +0000 UTC m=+568.324514399,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 10:46:41.263612 master-0 kubenswrapper[7271]: I0313 10:46:41.263418 7271 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b" exitCode=2
Mar 13 10:46:41.263612 master-0 kubenswrapper[7271]: I0313 10:46:41.263475 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b"}
Mar 13 10:46:41.263612 master-0 kubenswrapper[7271]: I0313 10:46:41.263566 7271 scope.go:117] "RemoveContainer" containerID="fe626963509dc538a60b23b46864b9cb9bb7ee0855c69d8f8a5c6f238417fdd4"
Mar 13 10:46:41.265242 master-0 kubenswrapper[7271]: I0313 10:46:41.265190 7271 scope.go:117] "RemoveContainer" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b"
Mar 13 10:46:41.266341 master-0 kubenswrapper[7271]: E0313 10:46:41.266285 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:46:41.884151 master-0 kubenswrapper[7271]: I0313 10:46:41.884070 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:41.884151 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:41.884151 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:41.884151 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:41.884944 master-0 kubenswrapper[7271]: I0313 10:46:41.884817 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:42.884082 master-0 kubenswrapper[7271]: I0313 10:46:42.883986 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:42.884082 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:42.884082 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:42.884082 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:42.884994 master-0 kubenswrapper[7271]: I0313 10:46:42.884124 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:43.883080 master-0 kubenswrapper[7271]: I0313 10:46:43.882987 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:43.883080 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:43.883080 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:43.883080 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:43.883706 master-0 kubenswrapper[7271]: I0313 10:46:43.883091 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:44.882860 master-0 kubenswrapper[7271]: I0313 10:46:44.882758 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:44.882860 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:44.882860 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:44.882860 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:44.884443 master-0 kubenswrapper[7271]: I0313 10:46:44.882887 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:44.943715 master-0 kubenswrapper[7271]: E0313 10:46:44.943365 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:46:45.883458 master-0 kubenswrapper[7271]: I0313 10:46:45.883367 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:45.883458 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:45.883458 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:45.883458 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:45.884194 master-0 kubenswrapper[7271]: I0313 10:46:45.883471 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:46.883636 master-0 kubenswrapper[7271]: I0313 10:46:46.883567 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:46.883636 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:46.883636 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:46.883636 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:46.884353 master-0 kubenswrapper[7271]: I0313 10:46:46.884321 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:47.216739 master-0 kubenswrapper[7271]: I0313 10:46:47.216514 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:46:47.217346 master-0 kubenswrapper[7271]: I0313 10:46:47.217289 7271 scope.go:117] "RemoveContainer" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b"
Mar 13 10:46:47.217709 master-0 kubenswrapper[7271]: E0313 10:46:47.217568 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:46:47.883529 master-0 kubenswrapper[7271]: I0313 10:46:47.883435 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:47.883529 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:47.883529 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:47.883529 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:47.884148 master-0 kubenswrapper[7271]: I0313 10:46:47.883526 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:48.883094 master-0 kubenswrapper[7271]: I0313 10:46:48.883029 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:48.883094 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:48.883094 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:48.883094 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:48.883379 master-0 kubenswrapper[7271]: I0313 10:46:48.883115 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:49.060840 master-0 kubenswrapper[7271]: E0313 10:46:49.060762 7271 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 10:46:49.334533 master-0 kubenswrapper[7271]: I0313 10:46:49.334498 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x_bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/cluster-cloud-controller-manager/0.log"
Mar 13 10:46:49.334658 master-0 kubenswrapper[7271]: I0313 10:46:49.334547 7271 generic.go:334] "Generic (PLEG): container finished" podID="bfbaa57e-adac-48f8-8182-b4fdb42fbb9c" containerID="f5630038dc1bb4e46b0c3343da5e699daf5fd3e0af484ddecd21f624462048e4" exitCode=1
Mar 13 10:46:49.334905 master-0 kubenswrapper[7271]: I0313 10:46:49.334800 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" event={"ID":"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c","Type":"ContainerDied","Data":"f5630038dc1bb4e46b0c3343da5e699daf5fd3e0af484ddecd21f624462048e4"}
Mar 13 10:46:49.336316 master-0 kubenswrapper[7271]: I0313 10:46:49.336272 7271 scope.go:117] "RemoveContainer" containerID="f5630038dc1bb4e46b0c3343da5e699daf5fd3e0af484ddecd21f624462048e4"
Mar 13 10:46:49.884024 master-0 kubenswrapper[7271]: I0313 10:46:49.883856 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:49.884024 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:49.884024 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:49.884024 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:49.884782 master-0 kubenswrapper[7271]: I0313 10:46:49.884040 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:50.295643 master-0 kubenswrapper[7271]: E0313 10:46:50.295460 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Mar 13 10:46:50.347447 master-0 kubenswrapper[7271]: I0313 10:46:50.347399 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x_bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/cluster-cloud-controller-manager/0.log"
Mar 13 10:46:50.347707 master-0 kubenswrapper[7271]: I0313 10:46:50.347556 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" event={"ID":"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c","Type":"ContainerStarted","Data":"e4922373f878f811ba5ce34e6afd0442b52caab0dc4fd0fadca7b473cd7179e0"}
Mar 13 10:46:50.351723 master-0 kubenswrapper[7271]: I0313 10:46:50.351681 7271 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="504900721f39956a914d16037b49b7d46bf9d8673745bf5af0d69241e9d13d4d" exitCode=0
Mar 13 10:46:50.351909 master-0 kubenswrapper[7271]: I0313 10:46:50.351727 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"504900721f39956a914d16037b49b7d46bf9d8673745bf5af0d69241e9d13d4d"}
Mar 13 10:46:50.352043 master-0 kubenswrapper[7271]: I0313 10:46:50.352034 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9"
Mar 13 10:46:50.352129 master-0 kubenswrapper[7271]: I0313 10:46:50.352047 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9"
Mar 13 10:46:50.883872 master-0 kubenswrapper[7271]: I0313 10:46:50.883726 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:50.883872 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:50.883872 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:50.883872 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:50.883872 master-0 kubenswrapper[7271]: I0313 10:46:50.883838 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:51.368847 master-0 kubenswrapper[7271]: I0313 10:46:51.368796 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x_bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/config-sync-controllers/0.log"
Mar 13 10:46:51.370066 master-0 kubenswrapper[7271]: I0313 10:46:51.369994 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x_bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/cluster-cloud-controller-manager/0.log"
Mar 13 10:46:51.370211 master-0 kubenswrapper[7271]: I0313 10:46:51.370126 7271 generic.go:334] "Generic (PLEG): container finished" podID="bfbaa57e-adac-48f8-8182-b4fdb42fbb9c" containerID="9c62b3c2fdc62403c70efa03c341af1e11c584005c0854a7b9ae04a0957b3988" exitCode=1
Mar 13 10:46:51.370211 master-0 kubenswrapper[7271]: I0313 10:46:51.370195 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" event={"ID":"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c","Type":"ContainerDied","Data":"9c62b3c2fdc62403c70efa03c341af1e11c584005c0854a7b9ae04a0957b3988"}
Mar 13 10:46:51.371305 master-0 kubenswrapper[7271]: I0313 10:46:51.371250 7271 scope.go:117] "RemoveContainer" containerID="9c62b3c2fdc62403c70efa03c341af1e11c584005c0854a7b9ae04a0957b3988"
Mar 13 10:46:51.884760 master-0 kubenswrapper[7271]: I0313 10:46:51.884460 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:51.884760 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:51.884760 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:51.884760 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:51.884760 master-0 kubenswrapper[7271]: I0313 10:46:51.884649 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:52.378988 master-0 kubenswrapper[7271]: I0313 10:46:52.378944 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x_bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/config-sync-controllers/0.log"
Mar 13 10:46:52.379678 master-0 kubenswrapper[7271]: I0313 10:46:52.379556 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x_bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/cluster-cloud-controller-manager/0.log"
Mar 13 10:46:52.379744 master-0 kubenswrapper[7271]: I0313 10:46:52.379678 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" event={"ID":"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c","Type":"ContainerStarted","Data":"752ac798aba75bf6bd73e82523f71ef185438cd17acb52e4ed22823887dc5982"}
Mar 13 10:46:52.381589 master-0 kubenswrapper[7271]: I0313 10:46:52.381546 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/1.log"
Mar 13 10:46:52.382087 master-0 kubenswrapper[7271]: I0313 10:46:52.382049 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/0.log"
Mar 13 10:46:52.382147 master-0 kubenswrapper[7271]: I0313 10:46:52.382099 7271 generic.go:334] "Generic (PLEG): container finished" podID="6622be09-206e-4d02-90ca-6d9f2fc852aa" containerID="000574ac95c46dea00d94f10637b547931d5cf4cebc923f39d6577d129f9a2fa" exitCode=1
Mar 13 10:46:52.382147 master-0 kubenswrapper[7271]: I0313 10:46:52.382136 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerDied","Data":"000574ac95c46dea00d94f10637b547931d5cf4cebc923f39d6577d129f9a2fa"}
Mar 13 10:46:52.382250 master-0 kubenswrapper[7271]: I0313 10:46:52.382177 7271 scope.go:117] "RemoveContainer" containerID="426e576deb6604dde643ee98f5460b9f1475fda12e39205758c5b7f3ec56452f"
Mar 13 10:46:52.383961 master-0 kubenswrapper[7271]: I0313 10:46:52.383895 7271 scope.go:117] "RemoveContainer" containerID="000574ac95c46dea00d94f10637b547931d5cf4cebc923f39d6577d129f9a2fa"
Mar 13 10:46:52.384384 master-0 kubenswrapper[7271]: E0313 10:46:52.384341 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-cbhxt_openshift-cluster-storage-operator(6622be09-206e-4d02-90ca-6d9f2fc852aa)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" podUID="6622be09-206e-4d02-90ca-6d9f2fc852aa"
Mar 13 10:46:52.884663 master-0 kubenswrapper[7271]: I0313 10:46:52.884540 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:52.884663 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:52.884663 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:52.884663 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:52.884663 master-0 kubenswrapper[7271]: I0313 10:46:52.884659 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:53.395841 master-0 kubenswrapper[7271]: I0313 10:46:53.395771 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/1.log"
Mar 13 10:46:53.885915 master-0 kubenswrapper[7271]: I0313 10:46:53.885718 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:53.885915 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:53.885915 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:53.885915 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:53.885915 master-0 kubenswrapper[7271]: I0313 10:46:53.885825 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:54.883803 master-0 kubenswrapper[7271]: I0313 10:46:54.883722 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:54.883803 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:54.883803 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:54.883803 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:54.884522 master-0 kubenswrapper[7271]: I0313 10:46:54.883833 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:54.944917 master-0 kubenswrapper[7271]: E0313 10:46:54.944808 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 10:46:54.944917 master-0 kubenswrapper[7271]: E0313 10:46:54.944911 7271 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 10:46:55.884047 master-0 kubenswrapper[7271]: I0313 10:46:55.883940 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:55.884047 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:55.884047 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:55.884047 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:55.884772 master-0 kubenswrapper[7271]: I0313 10:46:55.884067 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:56.421385 master-0 kubenswrapper[7271]: I0313 10:46:56.421351 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/manager/1.log"
Mar 13 10:46:56.422029 master-0 kubenswrapper[7271]: I0313 10:46:56.422001 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/manager/0.log"
Mar 13 10:46:56.422492 master-0 kubenswrapper[7271]: I0313 10:46:56.422466 7271 generic.go:334] "Generic (PLEG): container finished" podID="257a4a8b-014c-4473-80a0-e95cf6d41bf1" containerID="505693401e0336c91ab91119b9f53889693ae2d79a1c0a657057ebc4d2c80fa9" exitCode=1
Mar 13 10:46:56.422566 master-0 kubenswrapper[7271]: I0313 10:46:56.422504 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" event={"ID":"257a4a8b-014c-4473-80a0-e95cf6d41bf1","Type":"ContainerDied","Data":"505693401e0336c91ab91119b9f53889693ae2d79a1c0a657057ebc4d2c80fa9"}
Mar 13 10:46:56.422566 master-0 kubenswrapper[7271]: I0313 10:46:56.422539 7271 scope.go:117] "RemoveContainer" containerID="5f05908e71448e64ca18d1219369017d904e020901e65c57a4853144db037d28"
Mar 13 10:46:56.423101 master-0 kubenswrapper[7271]: I0313 10:46:56.423085 7271 scope.go:117] "RemoveContainer" containerID="505693401e0336c91ab91119b9f53889693ae2d79a1c0a657057ebc4d2c80fa9"
Mar 13 10:46:56.423278 master-0 kubenswrapper[7271]: E0313 10:46:56.423261 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-7f8b8b6f4c-f46qd_openshift-catalogd(257a4a8b-014c-4473-80a0-e95cf6d41bf1)\"" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" podUID="257a4a8b-014c-4473-80a0-e95cf6d41bf1"
Mar 13 10:46:56.883207 master-0 kubenswrapper[7271]: I0313 10:46:56.883130 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:56.883207 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:56.883207 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:56.883207 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:56.883207 master-0 kubenswrapper[7271]: I0313 10:46:56.883206 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:57.436740 master-0 kubenswrapper[7271]: I0313 10:46:57.436565 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/manager/1.log"
Mar 13 10:46:57.439451 master-0 kubenswrapper[7271]: I0313 10:46:57.439385 7271 generic.go:334] "Generic (PLEG): container finished" podID="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" containerID="2b215655327c77c15b5c8c962ef77f234a333c87823e067c5e476916a7abcdf5" exitCode=0
Mar 13 10:46:57.439647 master-0 kubenswrapper[7271]: I0313 10:46:57.439444 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" event={"ID":"66f49a19-0e3b-4611-b8a6-5f5687fa20b6","Type":"ContainerDied","Data":"2b215655327c77c15b5c8c962ef77f234a333c87823e067c5e476916a7abcdf5"}
Mar 13 10:46:57.440375 master-0 kubenswrapper[7271]: I0313 10:46:57.440317 7271 scope.go:117] "RemoveContainer" containerID="2b215655327c77c15b5c8c962ef77f234a333c87823e067c5e476916a7abcdf5"
Mar 13 10:46:57.883412 master-0 kubenswrapper[7271]: I0313 10:46:57.883309 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:57.883412 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:57.883412 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:57.883412 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:57.883707 master-0 kubenswrapper[7271]: I0313 10:46:57.883483 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:58.449068 master-0 kubenswrapper[7271]: I0313 10:46:58.449021 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d" event={"ID":"66f49a19-0e3b-4611-b8a6-5f5687fa20b6","Type":"ContainerStarted","Data":"5e8fd25ffa963b486f5c4f508253a9a7ebbb066e433037c712f5375d7c178a40"}
Mar 13 10:46:58.450130 master-0 kubenswrapper[7271]: I0313 10:46:58.450077 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:46:58.454097 master-0 kubenswrapper[7271]: I0313 10:46:58.454067 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:46:58.883359 master-0 kubenswrapper[7271]: I0313 10:46:58.883274 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:58.883359 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:58.883359 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:58.883359 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:58.883911 master-0 kubenswrapper[7271]: I0313 10:46:58.883367 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:46:59.646031 master-0 kubenswrapper[7271]: I0313 10:46:59.645838 7271 scope.go:117] "RemoveContainer" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b"
Mar 13 10:46:59.647102 master-0 kubenswrapper[7271]: E0313 10:46:59.646290 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:46:59.882331 master-0 kubenswrapper[7271]: I0313 10:46:59.882278 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:46:59.882331 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:46:59.882331 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:46:59.882331 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:46:59.882640 master-0 kubenswrapper[7271]: I0313 10:46:59.882342 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:47:00.882529 master-0 kubenswrapper[7271]: I0313 10:47:00.882430 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:47:00.882529 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:47:00.882529 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:47:00.882529 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:47:00.882529 master-0 kubenswrapper[7271]: I0313 10:47:00.882528 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:47:01.883159 master-0 kubenswrapper[7271]: I0313 10:47:01.883107 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:47:01.883159 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:47:01.883159 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:47:01.883159 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:47:01.883889 master-0 kubenswrapper[7271]: I0313 10:47:01.883172 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:47:02.051033 master-0 kubenswrapper[7271]: I0313 10:47:02.050930 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13
10:47:02.051858 master-0 kubenswrapper[7271]: I0313 10:47:02.051814 7271 scope.go:117] "RemoveContainer" containerID="505693401e0336c91ab91119b9f53889693ae2d79a1c0a657057ebc4d2c80fa9" Mar 13 10:47:02.052086 master-0 kubenswrapper[7271]: E0313 10:47:02.052050 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-7f8b8b6f4c-f46qd_openshift-catalogd(257a4a8b-014c-4473-80a0-e95cf6d41bf1)\"" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" podUID="257a4a8b-014c-4473-80a0-e95cf6d41bf1" Mar 13 10:47:02.646243 master-0 kubenswrapper[7271]: I0313 10:47:02.646179 7271 scope.go:117] "RemoveContainer" containerID="000574ac95c46dea00d94f10637b547931d5cf4cebc923f39d6577d129f9a2fa" Mar 13 10:47:02.882260 master-0 kubenswrapper[7271]: I0313 10:47:02.882201 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:02.882260 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:02.882260 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:02.882260 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:02.882552 master-0 kubenswrapper[7271]: I0313 10:47:02.882270 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:03.487414 master-0 kubenswrapper[7271]: I0313 10:47:03.487339 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/1.log" Mar 13 10:47:03.488072 master-0 kubenswrapper[7271]: I0313 10:47:03.487419 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerStarted","Data":"2659f17a2e54824976eda6b45b5b1088c2f8ddbf3c79f3eeaf4ba2530b687e1d"} Mar 13 10:47:03.496062 master-0 kubenswrapper[7271]: E0313 10:47:03.496009 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 13 10:47:03.884478 master-0 kubenswrapper[7271]: I0313 10:47:03.884408 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:03.884478 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:03.884478 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:03.884478 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:03.885016 master-0 kubenswrapper[7271]: I0313 10:47:03.884521 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:04.883190 master-0 kubenswrapper[7271]: I0313 10:47:04.883129 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:04.883190 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:04.883190 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:04.883190 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:04.883912 master-0 kubenswrapper[7271]: I0313 10:47:04.883202 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:05.112379 master-0 kubenswrapper[7271]: I0313 10:47:05.112282 7271 status_manager.go:851] "Failed to get status for pod" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods router-default-79f8cd6fdd-b4x54)" Mar 13 10:47:05.883328 master-0 kubenswrapper[7271]: I0313 10:47:05.883280 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:05.883328 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:05.883328 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:05.883328 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:05.884033 master-0 kubenswrapper[7271]: I0313 10:47:05.884003 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:06.883373 master-0 
kubenswrapper[7271]: I0313 10:47:06.883329 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:06.883373 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:06.883373 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:06.883373 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:06.884175 master-0 kubenswrapper[7271]: I0313 10:47:06.884148 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:07.883288 master-0 kubenswrapper[7271]: I0313 10:47:07.883204 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:07.883288 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:07.883288 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:07.883288 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:07.884064 master-0 kubenswrapper[7271]: I0313 10:47:07.883290 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:08.884612 master-0 kubenswrapper[7271]: I0313 10:47:08.884502 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:08.884612 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:08.884612 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:08.884612 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:08.885830 master-0 kubenswrapper[7271]: I0313 10:47:08.884632 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:09.550919 master-0 kubenswrapper[7271]: I0313 10:47:09.550860 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/manager/1.log" Mar 13 10:47:09.552025 master-0 kubenswrapper[7271]: I0313 10:47:09.551988 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/manager/0.log" Mar 13 10:47:09.552111 master-0 kubenswrapper[7271]: I0313 10:47:09.552048 7271 generic.go:334] "Generic (PLEG): container finished" podID="b10584c2-ef04-4649-bcb6-9222c9530c3f" containerID="2a9aaa81e2cc4ad44480999dff8ac1b2c80678408fd67b6fb365310487f92570" exitCode=1 Mar 13 10:47:09.552111 master-0 kubenswrapper[7271]: I0313 10:47:09.552088 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" event={"ID":"b10584c2-ef04-4649-bcb6-9222c9530c3f","Type":"ContainerDied","Data":"2a9aaa81e2cc4ad44480999dff8ac1b2c80678408fd67b6fb365310487f92570"} Mar 13 10:47:09.552207 master-0 kubenswrapper[7271]: I0313 10:47:09.552131 7271 scope.go:117] "RemoveContainer" 
containerID="f661d164e1cae288da9b5b814f572be1703c2513d35aac45b2b22784229191e4" Mar 13 10:47:09.553307 master-0 kubenswrapper[7271]: I0313 10:47:09.553265 7271 scope.go:117] "RemoveContainer" containerID="2a9aaa81e2cc4ad44480999dff8ac1b2c80678408fd67b6fb365310487f92570" Mar 13 10:47:09.553603 master-0 kubenswrapper[7271]: E0313 10:47:09.553550 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-6598bfb6c4-bg6zf_openshift-operator-controller(b10584c2-ef04-4649-bcb6-9222c9530c3f)\"" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" podUID="b10584c2-ef04-4649-bcb6-9222c9530c3f" Mar 13 10:47:09.883247 master-0 kubenswrapper[7271]: I0313 10:47:09.883042 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:09.883247 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:09.883247 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:09.883247 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:09.883247 master-0 kubenswrapper[7271]: I0313 10:47:09.883150 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:10.559881 master-0 kubenswrapper[7271]: I0313 10:47:10.559833 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/manager/1.log" Mar 13 10:47:10.883733 master-0 
kubenswrapper[7271]: I0313 10:47:10.883628 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:10.883733 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:10.883733 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:10.883733 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:10.883733 master-0 kubenswrapper[7271]: I0313 10:47:10.883683 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:11.883260 master-0 kubenswrapper[7271]: I0313 10:47:11.883073 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:11.883260 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:11.883260 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:11.883260 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:11.883260 master-0 kubenswrapper[7271]: I0313 10:47:11.883164 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:12.050436 master-0 kubenswrapper[7271]: I0313 10:47:12.050325 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:47:12.051529 
master-0 kubenswrapper[7271]: I0313 10:47:12.051484 7271 scope.go:117] "RemoveContainer" containerID="505693401e0336c91ab91119b9f53889693ae2d79a1c0a657057ebc4d2c80fa9" Mar 13 10:47:12.581203 master-0 kubenswrapper[7271]: I0313 10:47:12.581175 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/manager/1.log" Mar 13 10:47:12.582144 master-0 kubenswrapper[7271]: I0313 10:47:12.582094 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" event={"ID":"257a4a8b-014c-4473-80a0-e95cf6d41bf1","Type":"ContainerStarted","Data":"6b50b66b0390faa577d10d9d1c7e8316f0fb5fecf215c465d87e54304fc85a69"} Mar 13 10:47:12.582357 master-0 kubenswrapper[7271]: I0313 10:47:12.582337 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:47:12.884242 master-0 kubenswrapper[7271]: I0313 10:47:12.884018 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:12.884242 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:12.884242 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:12.884242 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:12.884242 master-0 kubenswrapper[7271]: I0313 10:47:12.884088 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:13.883031 master-0 kubenswrapper[7271]: I0313 10:47:13.882923 7271 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:13.883031 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:13.883031 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:13.883031 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:13.883348 master-0 kubenswrapper[7271]: I0313 10:47:13.883075 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:14.646291 master-0 kubenswrapper[7271]: I0313 10:47:14.646161 7271 scope.go:117] "RemoveContainer" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b" Mar 13 10:47:14.647106 master-0 kubenswrapper[7271]: E0313 10:47:14.646755 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:47:14.882530 master-0 kubenswrapper[7271]: I0313 10:47:14.882423 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:14.882530 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:14.882530 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:14.882530 master-0 
kubenswrapper[7271]: healthz check failed Mar 13 10:47:14.882530 master-0 kubenswrapper[7271]: I0313 10:47:14.882504 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:15.007995 master-0 kubenswrapper[7271]: E0313 10:47:15.007726 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:47:04Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:47:04Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:47:04Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:47:04Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4855408bd0e4d0711383d0c14dcad53c98255ff9f83f6cbefb57e47eacc1f1f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:97bdbb5854e4ad7976209a44cff02c8a2b9542f58ad007c06a5c3a5e8266def1\\\",\\\"registry.redhat.io/redhat/certified-operator-
index:v4.18\\\"],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8c978bb5c329452b181f61f00452b4c2bfd83d245db56050bc7607972a791a76\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:e6567accc084db971e077b5ca666357e3a326fa27f69fc7135a5bc2e19f998eb\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221745369},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":8626
33255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f96
0cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152
bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:47:15.079112 master-0 kubenswrapper[7271]: E0313 10:47:15.078960 7271 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c60bbb1b96028 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod 
bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:45:12.533999656 +0000 UTC m=+567.060822086,LastTimestamp:2026-03-13 10:45:17.217490047 +0000 UTC m=+571.744312447,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:47:15.882880 master-0 kubenswrapper[7271]: I0313 10:47:15.882780 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:15.882880 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:15.882880 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:15.882880 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:15.883606 master-0 kubenswrapper[7271]: I0313 10:47:15.882904 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:16.853550 master-0 kubenswrapper[7271]: I0313 10:47:16.853489 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:47:16.854149 master-0 kubenswrapper[7271]: I0313 10:47:16.854122 7271 scope.go:117] "RemoveContainer" containerID="2a9aaa81e2cc4ad44480999dff8ac1b2c80678408fd67b6fb365310487f92570" Mar 13 10:47:16.854397 master-0 kubenswrapper[7271]: E0313 10:47:16.854367 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=manager pod=operator-controller-controller-manager-6598bfb6c4-bg6zf_openshift-operator-controller(b10584c2-ef04-4649-bcb6-9222c9530c3f)\"" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" podUID="b10584c2-ef04-4649-bcb6-9222c9530c3f" Mar 13 10:47:16.883106 master-0 kubenswrapper[7271]: I0313 10:47:16.883043 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:16.883106 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:16.883106 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:16.883106 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:16.883106 master-0 kubenswrapper[7271]: I0313 10:47:16.883091 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:17.883720 master-0 kubenswrapper[7271]: I0313 10:47:17.883649 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:17.883720 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:17.883720 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:17.883720 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:17.883720 master-0 kubenswrapper[7271]: I0313 10:47:17.883727 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:18.883997 master-0 kubenswrapper[7271]: I0313 10:47:18.883933 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:18.883997 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:18.883997 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:18.883997 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:18.884608 master-0 kubenswrapper[7271]: I0313 10:47:18.884011 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:19.884781 master-0 kubenswrapper[7271]: I0313 10:47:19.884645 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:19.884781 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:19.884781 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:19.884781 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:19.886021 master-0 kubenswrapper[7271]: I0313 10:47:19.884782 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:19.898118 master-0 kubenswrapper[7271]: E0313 10:47:19.897996 7271 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 10:47:20.884573 master-0 kubenswrapper[7271]: I0313 10:47:20.884484 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:20.884573 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:20.884573 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:20.884573 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:20.885578 master-0 kubenswrapper[7271]: I0313 10:47:20.884573 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:21.883393 master-0 kubenswrapper[7271]: I0313 10:47:21.883327 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:21.883393 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:21.883393 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:21.883393 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:21.883393 master-0 kubenswrapper[7271]: I0313 10:47:21.883392 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 13 10:47:22.053554 master-0 kubenswrapper[7271]: I0313 10:47:22.053439 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:47:22.883414 master-0 kubenswrapper[7271]: I0313 10:47:22.883324 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:22.883414 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:22.883414 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:22.883414 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:22.883771 master-0 kubenswrapper[7271]: I0313 10:47:22.883454 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:23.882780 master-0 kubenswrapper[7271]: I0313 10:47:23.882684 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:23.882780 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:23.882780 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:23.882780 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:23.883510 master-0 kubenswrapper[7271]: I0313 10:47:23.882790 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 13 10:47:24.355084 master-0 kubenswrapper[7271]: E0313 10:47:24.354977 7271 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 10:47:24.661936 master-0 kubenswrapper[7271]: I0313 10:47:24.661873 7271 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="b184c6d2b52d3742ae6eeca434d2692ca2f0557fa56d061b66512b5f8dfea300" exitCode=0 Mar 13 10:47:24.661936 master-0 kubenswrapper[7271]: I0313 10:47:24.661920 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"b184c6d2b52d3742ae6eeca434d2692ca2f0557fa56d061b66512b5f8dfea300"} Mar 13 10:47:24.662260 master-0 kubenswrapper[7271]: I0313 10:47:24.662233 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:47:24.662260 master-0 kubenswrapper[7271]: I0313 10:47:24.662252 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:47:24.883880 master-0 kubenswrapper[7271]: I0313 10:47:24.883709 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:24.883880 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:24.883880 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:24.883880 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:24.883880 master-0 kubenswrapper[7271]: I0313 10:47:24.883840 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:25.008994 master-0 kubenswrapper[7271]: E0313 10:47:25.008875 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:47:25.882397 master-0 kubenswrapper[7271]: I0313 10:47:25.882344 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:25.882397 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:25.882397 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:25.882397 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:25.882717 master-0 kubenswrapper[7271]: I0313 10:47:25.882404 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:26.853372 master-0 kubenswrapper[7271]: I0313 10:47:26.853287 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:47:26.854530 master-0 kubenswrapper[7271]: I0313 10:47:26.854210 7271 scope.go:117] "RemoveContainer" containerID="2a9aaa81e2cc4ad44480999dff8ac1b2c80678408fd67b6fb365310487f92570" Mar 13 10:47:26.883644 master-0 kubenswrapper[7271]: I0313 10:47:26.883567 7271 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:26.883644 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:26.883644 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:26.883644 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:26.883997 master-0 kubenswrapper[7271]: I0313 10:47:26.883682 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:27.684285 master-0 kubenswrapper[7271]: I0313 10:47:27.684226 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/manager/1.log" Mar 13 10:47:27.684807 master-0 kubenswrapper[7271]: I0313 10:47:27.684771 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" event={"ID":"b10584c2-ef04-4649-bcb6-9222c9530c3f","Type":"ContainerStarted","Data":"03b1bece82c3b7ecfcedf26ed349256f268bf4773553f02e8e66e16d148a3a1f"} Mar 13 10:47:27.685080 master-0 kubenswrapper[7271]: I0313 10:47:27.685050 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:47:27.882878 master-0 kubenswrapper[7271]: I0313 10:47:27.882823 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
10:47:27.882878 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:27.882878 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:27.882878 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:27.883522 master-0 kubenswrapper[7271]: I0313 10:47:27.882887 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:28.882429 master-0 kubenswrapper[7271]: I0313 10:47:28.882346 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:28.882429 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:28.882429 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:28.882429 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:28.882806 master-0 kubenswrapper[7271]: I0313 10:47:28.882422 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:29.646099 master-0 kubenswrapper[7271]: I0313 10:47:29.646033 7271 scope.go:117] "RemoveContainer" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b" Mar 13 10:47:29.646898 master-0 kubenswrapper[7271]: E0313 10:47:29.646840 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:47:29.882931 master-0 kubenswrapper[7271]: I0313 10:47:29.882853 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:29.882931 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:29.882931 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:29.882931 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:29.883408 master-0 kubenswrapper[7271]: I0313 10:47:29.882959 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:30.883300 master-0 kubenswrapper[7271]: I0313 10:47:30.883208 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:30.883300 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:30.883300 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:30.883300 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:30.883964 master-0 kubenswrapper[7271]: I0313 10:47:30.883336 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 
10:47:31.883041 master-0 kubenswrapper[7271]: I0313 10:47:31.882996 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:31.883041 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:31.883041 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:31.883041 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:31.883318 master-0 kubenswrapper[7271]: I0313 10:47:31.883053 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:32.750206 master-0 kubenswrapper[7271]: I0313 10:47:32.750147 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-gsr52_070b85a0-f076-4750-aa00-dabba401dc75/cluster-baremetal-operator/0.log" Mar 13 10:47:32.750805 master-0 kubenswrapper[7271]: I0313 10:47:32.750221 7271 generic.go:334] "Generic (PLEG): container finished" podID="070b85a0-f076-4750-aa00-dabba401dc75" containerID="594cd9998ea936cf92d6d0f81aec77530767beeb227080ba41181e70dc234520" exitCode=1 Mar 13 10:47:32.750805 master-0 kubenswrapper[7271]: I0313 10:47:32.750345 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" event={"ID":"070b85a0-f076-4750-aa00-dabba401dc75","Type":"ContainerDied","Data":"594cd9998ea936cf92d6d0f81aec77530767beeb227080ba41181e70dc234520"} Mar 13 10:47:32.751242 master-0 kubenswrapper[7271]: I0313 10:47:32.751205 7271 scope.go:117] "RemoveContainer" containerID="594cd9998ea936cf92d6d0f81aec77530767beeb227080ba41181e70dc234520" Mar 13 10:47:32.752347 
master-0 kubenswrapper[7271]: I0313 10:47:32.752149 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-hszft_484e6d0b-d057-4658-8e49-bbe7e6f6ee86/control-plane-machine-set-operator/0.log" Mar 13 10:47:32.752347 master-0 kubenswrapper[7271]: I0313 10:47:32.752210 7271 generic.go:334] "Generic (PLEG): container finished" podID="484e6d0b-d057-4658-8e49-bbe7e6f6ee86" containerID="06f340bfe3defa99f6d96411a1e67581d7833b82a603be2ce7a6f91338e36131" exitCode=1 Mar 13 10:47:32.752347 master-0 kubenswrapper[7271]: I0313 10:47:32.752241 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft" event={"ID":"484e6d0b-d057-4658-8e49-bbe7e6f6ee86","Type":"ContainerDied","Data":"06f340bfe3defa99f6d96411a1e67581d7833b82a603be2ce7a6f91338e36131"} Mar 13 10:47:32.752710 master-0 kubenswrapper[7271]: I0313 10:47:32.752675 7271 scope.go:117] "RemoveContainer" containerID="06f340bfe3defa99f6d96411a1e67581d7833b82a603be2ce7a6f91338e36131" Mar 13 10:47:32.882897 master-0 kubenswrapper[7271]: I0313 10:47:32.882853 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:32.882897 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:32.882897 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:32.882897 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:32.883075 master-0 kubenswrapper[7271]: I0313 10:47:32.882908 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:33.759965 
master-0 kubenswrapper[7271]: I0313 10:47:33.759910 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/2.log" Mar 13 10:47:33.760538 master-0 kubenswrapper[7271]: I0313 10:47:33.760448 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/1.log" Mar 13 10:47:33.760538 master-0 kubenswrapper[7271]: I0313 10:47:33.760487 7271 generic.go:334] "Generic (PLEG): container finished" podID="6622be09-206e-4d02-90ca-6d9f2fc852aa" containerID="2659f17a2e54824976eda6b45b5b1088c2f8ddbf3c79f3eeaf4ba2530b687e1d" exitCode=1 Mar 13 10:47:33.760642 master-0 kubenswrapper[7271]: I0313 10:47:33.760540 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerDied","Data":"2659f17a2e54824976eda6b45b5b1088c2f8ddbf3c79f3eeaf4ba2530b687e1d"} Mar 13 10:47:33.760642 master-0 kubenswrapper[7271]: I0313 10:47:33.760572 7271 scope.go:117] "RemoveContainer" containerID="000574ac95c46dea00d94f10637b547931d5cf4cebc923f39d6577d129f9a2fa" Mar 13 10:47:33.761405 master-0 kubenswrapper[7271]: I0313 10:47:33.761356 7271 scope.go:117] "RemoveContainer" containerID="2659f17a2e54824976eda6b45b5b1088c2f8ddbf3c79f3eeaf4ba2530b687e1d" Mar 13 10:47:33.761619 master-0 kubenswrapper[7271]: E0313 10:47:33.761557 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-cbhxt_openshift-cluster-storage-operator(6622be09-206e-4d02-90ca-6d9f2fc852aa)\"" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" podUID="6622be09-206e-4d02-90ca-6d9f2fc852aa" Mar 13 10:47:33.763026 master-0 kubenswrapper[7271]: I0313 10:47:33.762949 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-gsr52_070b85a0-f076-4750-aa00-dabba401dc75/cluster-baremetal-operator/0.log" Mar 13 10:47:33.763101 master-0 kubenswrapper[7271]: I0313 10:47:33.763064 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" event={"ID":"070b85a0-f076-4750-aa00-dabba401dc75","Type":"ContainerStarted","Data":"d57d698ca8efd80b0c40df921aa32d90fdd37b423f221fd31fb6e45b5640ad03"} Mar 13 10:47:33.766639 master-0 kubenswrapper[7271]: I0313 10:47:33.766582 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-hszft_484e6d0b-d057-4658-8e49-bbe7e6f6ee86/control-plane-machine-set-operator/0.log" Mar 13 10:47:33.766639 master-0 kubenswrapper[7271]: I0313 10:47:33.766646 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft" event={"ID":"484e6d0b-d057-4658-8e49-bbe7e6f6ee86","Type":"ContainerStarted","Data":"386e5852b0cfa7fe85b23f6cf7ed2421564788e77353051711340d89f67cd47c"} Mar 13 10:47:33.883190 master-0 kubenswrapper[7271]: I0313 10:47:33.883135 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:33.883190 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:33.883190 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:33.883190 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:33.883475 
master-0 kubenswrapper[7271]: I0313 10:47:33.883196 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:34.774627 master-0 kubenswrapper[7271]: I0313 10:47:34.774449 7271 generic.go:334] "Generic (PLEG): container finished" podID="1c12a5d5-711f-4663-974c-c4b06e15fc39" containerID="3711f960c560ecb4568aab641312d36db294714abc5c774ce0693e59fb2ba6d8" exitCode=0 Mar 13 10:47:34.774627 master-0 kubenswrapper[7271]: I0313 10:47:34.774543 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" event={"ID":"1c12a5d5-711f-4663-974c-c4b06e15fc39","Type":"ContainerDied","Data":"3711f960c560ecb4568aab641312d36db294714abc5c774ce0693e59fb2ba6d8"} Mar 13 10:47:34.775415 master-0 kubenswrapper[7271]: I0313 10:47:34.775180 7271 scope.go:117] "RemoveContainer" containerID="3711f960c560ecb4568aab641312d36db294714abc5c774ce0693e59fb2ba6d8" Mar 13 10:47:34.780010 master-0 kubenswrapper[7271]: I0313 10:47:34.779965 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/2.log" Mar 13 10:47:34.883351 master-0 kubenswrapper[7271]: I0313 10:47:34.883278 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:34.883351 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:34.883351 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:34.883351 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:34.883682 master-0 
kubenswrapper[7271]: I0313 10:47:34.883383 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:35.010160 master-0 kubenswrapper[7271]: E0313 10:47:35.010069 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:47:35.787848 master-0 kubenswrapper[7271]: I0313 10:47:35.787784 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" event={"ID":"1c12a5d5-711f-4663-974c-c4b06e15fc39","Type":"ContainerStarted","Data":"e414a78858ab4ef79175a7f973f3e4d79ae8e8dcae65a85e1b5f6c53e7811bd7"} Mar 13 10:47:35.882522 master-0 kubenswrapper[7271]: I0313 10:47:35.882451 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:35.882522 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:35.882522 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:35.882522 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:35.882954 master-0 kubenswrapper[7271]: I0313 10:47:35.882530 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:36.795188 master-0 kubenswrapper[7271]: I0313 10:47:36.795048 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-jcn8f_ec121f87-93ea-468c-a25f-2ec5e7d0e0ee/machine-approver-controller/0.log" Mar 13 10:47:36.795932 master-0 kubenswrapper[7271]: I0313 10:47:36.795460 7271 generic.go:334] "Generic (PLEG): container finished" podID="ec121f87-93ea-468c-a25f-2ec5e7d0e0ee" containerID="64678ebcb68e6bed917a1b002aba4f9986d59e81a6fdab83010f8da8b3807323" exitCode=255 Mar 13 10:47:36.795932 master-0 kubenswrapper[7271]: I0313 10:47:36.795525 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" event={"ID":"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee","Type":"ContainerDied","Data":"64678ebcb68e6bed917a1b002aba4f9986d59e81a6fdab83010f8da8b3807323"} Mar 13 10:47:36.796707 master-0 kubenswrapper[7271]: I0313 10:47:36.796663 7271 scope.go:117] "RemoveContainer" containerID="64678ebcb68e6bed917a1b002aba4f9986d59e81a6fdab83010f8da8b3807323" Mar 13 10:47:36.855909 master-0 kubenswrapper[7271]: I0313 10:47:36.855864 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:47:36.883706 master-0 kubenswrapper[7271]: I0313 10:47:36.883654 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:36.883706 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:36.883706 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:36.883706 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:36.884084 master-0 kubenswrapper[7271]: I0313 10:47:36.883726 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:47:36.899288 master-0 kubenswrapper[7271]: E0313 10:47:36.899239 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 13 10:47:37.806500 master-0 kubenswrapper[7271]: I0313 10:47:37.806437 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-jcn8f_ec121f87-93ea-468c-a25f-2ec5e7d0e0ee/machine-approver-controller/0.log" Mar 13 10:47:37.807258 master-0 kubenswrapper[7271]: I0313 10:47:37.806925 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" event={"ID":"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee","Type":"ContainerStarted","Data":"cafb65703fe959fb5425130fa2537f7db46e001e08726e36ebf54de34f2ef5e8"} Mar 13 10:47:37.882039 master-0 kubenswrapper[7271]: I0313 10:47:37.881960 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:47:37.882039 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:47:37.882039 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:47:37.882039 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:47:37.882336 master-0 kubenswrapper[7271]: I0313 10:47:37.882061 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 
10:47:37.882336 master-0 kubenswrapper[7271]: I0313 10:47:37.882115 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:47:37.882840 master-0 kubenswrapper[7271]: I0313 10:47:37.882808 7271 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"355bba8a4cefe5a34bf9903f07fd7230c56e2657d48a952a7979a55c45edb0b5"} pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" containerMessage="Container router failed startup probe, will be restarted" Mar 13 10:47:37.882920 master-0 kubenswrapper[7271]: I0313 10:47:37.882855 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" containerID="cri-o://355bba8a4cefe5a34bf9903f07fd7230c56e2657d48a952a7979a55c45edb0b5" gracePeriod=3600 Mar 13 10:47:40.828764 master-0 kubenswrapper[7271]: I0313 10:47:40.828668 7271 generic.go:334] "Generic (PLEG): container finished" podID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerID="3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432" exitCode=0 Mar 13 10:47:40.828764 master-0 kubenswrapper[7271]: I0313 10:47:40.828716 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" event={"ID":"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc","Type":"ContainerDied","Data":"3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432"} Mar 13 10:47:40.829413 master-0 kubenswrapper[7271]: I0313 10:47:40.829245 7271 scope.go:117] "RemoveContainer" containerID="3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432" Mar 13 10:47:41.836304 master-0 kubenswrapper[7271]: I0313 10:47:41.836252 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" 
event={"ID":"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc","Type":"ContainerStarted","Data":"f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e"} Mar 13 10:47:41.837165 master-0 kubenswrapper[7271]: I0313 10:47:41.837135 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:47:41.840046 master-0 kubenswrapper[7271]: I0313 10:47:41.840012 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:47:44.645739 master-0 kubenswrapper[7271]: I0313 10:47:44.645672 7271 scope.go:117] "RemoveContainer" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b" Mar 13 10:47:44.646328 master-0 kubenswrapper[7271]: E0313 10:47:44.645959 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:47:45.010971 master-0 kubenswrapper[7271]: E0313 10:47:45.010895 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:47:45.870758 master-0 kubenswrapper[7271]: I0313 10:47:45.870729 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler/0.log" Mar 13 10:47:45.871714 master-0 kubenswrapper[7271]: I0313 10:47:45.871677 7271 generic.go:334] "Generic (PLEG): container finished" 
podID="1453f6461bf5d599ad65a4656343ee91" containerID="43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675" exitCode=1 Mar 13 10:47:45.871799 master-0 kubenswrapper[7271]: I0313 10:47:45.871724 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675"} Mar 13 10:47:45.872315 master-0 kubenswrapper[7271]: I0313 10:47:45.872284 7271 scope.go:117] "RemoveContainer" containerID="43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675" Mar 13 10:47:46.880111 master-0 kubenswrapper[7271]: I0313 10:47:46.880063 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler/0.log" Mar 13 10:47:46.880773 master-0 kubenswrapper[7271]: I0313 10:47:46.880487 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2"} Mar 13 10:47:46.880952 master-0 kubenswrapper[7271]: I0313 10:47:46.880894 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:47:48.645609 master-0 kubenswrapper[7271]: I0313 10:47:48.645534 7271 scope.go:117] "RemoveContainer" containerID="2659f17a2e54824976eda6b45b5b1088c2f8ddbf3c79f3eeaf4ba2530b687e1d" Mar 13 10:47:48.646284 master-0 kubenswrapper[7271]: E0313 10:47:48.645858 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-7577d6f48-cbhxt_openshift-cluster-storage-operator(6622be09-206e-4d02-90ca-6d9f2fc852aa)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" podUID="6622be09-206e-4d02-90ca-6d9f2fc852aa" Mar 13 10:47:49.081474 master-0 kubenswrapper[7271]: E0313 10:47:49.081296 7271 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c60bbb1b96028 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:45:12.533999656 +0000 UTC m=+567.060822086,LastTimestamp:2026-03-13 10:45:17.583695246 +0000 UTC m=+572.110517646,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:47:53.900903 master-0 kubenswrapper[7271]: E0313 10:47:53.900555 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 10:47:55.012056 master-0 kubenswrapper[7271]: E0313 10:47:55.011934 7271 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 10:47:55.012056 master-0 kubenswrapper[7271]: E0313 10:47:55.012000 7271 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 10:47:57.645808 master-0 kubenswrapper[7271]: I0313 10:47:57.645734 7271 scope.go:117] "RemoveContainer" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b" Mar 13 10:47:57.646415 master-0 kubenswrapper[7271]: E0313 10:47:57.646061 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:47:58.665196 master-0 kubenswrapper[7271]: E0313 10:47:58.665075 7271 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 10:47:58.983268 master-0 kubenswrapper[7271]: I0313 10:47:58.983133 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"ca7636226884dc934652ea1520a35839a32c066fbf42abbabb2eb40d4d464bfd"} Mar 13 10:47:59.646424 master-0 kubenswrapper[7271]: I0313 10:47:59.645843 7271 scope.go:117] "RemoveContainer" containerID="2659f17a2e54824976eda6b45b5b1088c2f8ddbf3c79f3eeaf4ba2530b687e1d" Mar 13 10:47:59.992071 master-0 kubenswrapper[7271]: I0313 10:47:59.992017 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/2.log" Mar 13 10:47:59.992919 master-0 kubenswrapper[7271]: I0313 10:47:59.992183 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerStarted","Data":"5e8801f6c03277ad0f15ce8a685fd31f58e857c66d9382e667630e4deb5cc346"} Mar 13 10:47:59.996646 master-0 kubenswrapper[7271]: I0313 10:47:59.996591 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5a793ad00a79db57ae38050e7749ba9d9b9d24a798febba0cba49980889c7482"} Mar 13 10:47:59.996646 master-0 kubenswrapper[7271]: I0313 10:47:59.996648 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"7dca9ce0c495134e155aab91ff3f2ccfbf29b25d2e905ee8170df03b7df6823b"} Mar 13 10:47:59.996821 master-0 kubenswrapper[7271]: I0313 10:47:59.996662 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"2f6be16b300f0db83df5af0658e94350a338f2488c82335f15c838e841d5ec1e"} Mar 13 10:47:59.996821 master-0 kubenswrapper[7271]: I0313 10:47:59.996672 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"669c949c8fe3c563ab473f1617a1daafb359deef2739ada0b41fbbdd93bb8d46"} Mar 13 10:47:59.997013 master-0 kubenswrapper[7271]: I0313 10:47:59.996973 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:47:59.997013 
master-0 kubenswrapper[7271]: I0313 10:47:59.997012 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:48:04.664873 master-0 kubenswrapper[7271]: I0313 10:48:04.664765 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 13 10:48:04.664873 master-0 kubenswrapper[7271]: I0313 10:48:04.664862 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 13 10:48:05.051692 master-0 kubenswrapper[7271]: I0313 10:48:05.051631 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/3.log" Mar 13 10:48:05.052569 master-0 kubenswrapper[7271]: I0313 10:48:05.052530 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/2.log" Mar 13 10:48:05.053458 master-0 kubenswrapper[7271]: I0313 10:48:05.053394 7271 generic.go:334] "Generic (PLEG): container finished" podID="7667717b-fb74-456b-8615-16475cb69e98" containerID="ea6075437ddab13db72254693b1402fadb6322d2c4a635387e569d11ef32e573" exitCode=1 Mar 13 10:48:05.053458 master-0 kubenswrapper[7271]: I0313 10:48:05.053438 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerDied","Data":"ea6075437ddab13db72254693b1402fadb6322d2c4a635387e569d11ef32e573"} Mar 13 10:48:05.053669 master-0 kubenswrapper[7271]: I0313 10:48:05.053500 7271 scope.go:117] "RemoveContainer" containerID="aedbabca0ae1386209b376e594af5a1aca17689f565bb27119f58a8f09e1fc7c" Mar 13 10:48:05.054573 master-0 kubenswrapper[7271]: I0313 10:48:05.054528 7271 scope.go:117] "RemoveContainer" 
containerID="ea6075437ddab13db72254693b1402fadb6322d2c4a635387e569d11ef32e573" Mar 13 10:48:05.055007 master-0 kubenswrapper[7271]: E0313 10:48:05.054956 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:48:05.113469 master-0 kubenswrapper[7271]: I0313 10:48:05.113367 7271 status_manager.go:851] "Failed to get status for pod" podUID="1453f6461bf5d599ad65a4656343ee91" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" Mar 13 10:48:06.061381 master-0 kubenswrapper[7271]: I0313 10:48:06.061315 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/3.log" Mar 13 10:48:08.645114 master-0 kubenswrapper[7271]: I0313 10:48:08.645039 7271 scope.go:117] "RemoveContainer" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b" Mar 13 10:48:09.093932 master-0 kubenswrapper[7271]: I0313 10:48:09.093865 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"} Mar 13 10:48:10.901805 master-0 kubenswrapper[7271]: E0313 10:48:10.901713 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 10:48:13.795561 master-0 kubenswrapper[7271]: I0313 10:48:13.795421 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:48:14.685835 master-0 kubenswrapper[7271]: I0313 10:48:14.685747 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 13 10:48:16.647161 master-0 kubenswrapper[7271]: I0313 10:48:16.647090 7271 scope.go:117] "RemoveContainer" containerID="ea6075437ddab13db72254693b1402fadb6322d2c4a635387e569d11ef32e573" Mar 13 10:48:16.647800 master-0 kubenswrapper[7271]: E0313 10:48:16.647335 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:48:17.468778 master-0 kubenswrapper[7271]: I0313 10:48:17.468716 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:48:19.679511 master-0 kubenswrapper[7271]: I0313 10:48:19.679455 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 13 10:48:20.469295 master-0 kubenswrapper[7271]: I0313 10:48:20.469212 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 10:48:24.203855 master-0 kubenswrapper[7271]: I0313 10:48:24.203672 7271 generic.go:334] "Generic (PLEG): container finished" podID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerID="355bba8a4cefe5a34bf9903f07fd7230c56e2657d48a952a7979a55c45edb0b5" exitCode=0 Mar 13 10:48:24.203855 master-0 kubenswrapper[7271]: I0313 10:48:24.203780 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerDied","Data":"355bba8a4cefe5a34bf9903f07fd7230c56e2657d48a952a7979a55c45edb0b5"} Mar 13 10:48:24.205022 master-0 kubenswrapper[7271]: I0313 10:48:24.203899 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerStarted","Data":"b7092e3092801ee7cac052ee6ef29cb5b3962e6dfa9253411baa18d8c09d2942"} Mar 13 10:48:24.205022 master-0 kubenswrapper[7271]: I0313 10:48:24.203950 7271 scope.go:117] "RemoveContainer" containerID="9aabceaa9098fa374fa3be7884e41fb57131871ca89880498f237e8d19971731" Mar 13 10:48:24.880339 master-0 kubenswrapper[7271]: I0313 10:48:24.880248 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:48:24.882566 master-0 kubenswrapper[7271]: I0313 10:48:24.882534 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:24.882566 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:24.882566 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:24.882566 master-0 kubenswrapper[7271]: 
healthz check failed Mar 13 10:48:24.882747 master-0 kubenswrapper[7271]: I0313 10:48:24.882599 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:25.883189 master-0 kubenswrapper[7271]: I0313 10:48:25.883125 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:25.883189 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:25.883189 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:25.883189 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:25.883794 master-0 kubenswrapper[7271]: I0313 10:48:25.883203 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:26.883957 master-0 kubenswrapper[7271]: I0313 10:48:26.883883 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:26.883957 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:26.883957 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:26.883957 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:26.884553 master-0 kubenswrapper[7271]: I0313 10:48:26.883978 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:27.882972 master-0 kubenswrapper[7271]: I0313 10:48:27.882904 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:27.882972 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:27.882972 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:27.882972 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:27.882972 master-0 kubenswrapper[7271]: I0313 10:48:27.882963 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:27.903583 master-0 kubenswrapper[7271]: E0313 10:48:27.903463 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 10:48:28.883278 master-0 kubenswrapper[7271]: I0313 10:48:28.883001 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:28.883278 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:28.883278 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:28.883278 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:28.883278 master-0 
kubenswrapper[7271]: I0313 10:48:28.883101 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:29.646037 master-0 kubenswrapper[7271]: I0313 10:48:29.645917 7271 scope.go:117] "RemoveContainer" containerID="ea6075437ddab13db72254693b1402fadb6322d2c4a635387e569d11ef32e573" Mar 13 10:48:29.647152 master-0 kubenswrapper[7271]: E0313 10:48:29.646207 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:48:29.883729 master-0 kubenswrapper[7271]: I0313 10:48:29.883666 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:29.883729 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:29.883729 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:29.883729 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:29.884351 master-0 kubenswrapper[7271]: I0313 10:48:29.884288 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:30.254636 master-0 kubenswrapper[7271]: I0313 10:48:30.254527 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/3.log" Mar 13 10:48:30.255023 master-0 kubenswrapper[7271]: I0313 10:48:30.254987 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/2.log" Mar 13 10:48:30.255122 master-0 kubenswrapper[7271]: I0313 10:48:30.255039 7271 generic.go:334] "Generic (PLEG): container finished" podID="6622be09-206e-4d02-90ca-6d9f2fc852aa" containerID="5e8801f6c03277ad0f15ce8a685fd31f58e857c66d9382e667630e4deb5cc346" exitCode=1 Mar 13 10:48:30.255122 master-0 kubenswrapper[7271]: I0313 10:48:30.255072 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerDied","Data":"5e8801f6c03277ad0f15ce8a685fd31f58e857c66d9382e667630e4deb5cc346"} Mar 13 10:48:30.255122 master-0 kubenswrapper[7271]: I0313 10:48:30.255109 7271 scope.go:117] "RemoveContainer" containerID="2659f17a2e54824976eda6b45b5b1088c2f8ddbf3c79f3eeaf4ba2530b687e1d" Mar 13 10:48:30.256126 master-0 kubenswrapper[7271]: I0313 10:48:30.256083 7271 scope.go:117] "RemoveContainer" containerID="5e8801f6c03277ad0f15ce8a685fd31f58e857c66d9382e667630e4deb5cc346" Mar 13 10:48:30.257759 master-0 kubenswrapper[7271]: E0313 10:48:30.257712 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-cbhxt_openshift-cluster-storage-operator(6622be09-206e-4d02-90ca-6d9f2fc852aa)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" podUID="6622be09-206e-4d02-90ca-6d9f2fc852aa" Mar 13 10:48:30.469851 
master-0 kubenswrapper[7271]: I0313 10:48:30.469411 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:48:30.884003 master-0 kubenswrapper[7271]: I0313 10:48:30.883909 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:30.884003 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:30.884003 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:30.884003 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:30.885280 master-0 kubenswrapper[7271]: I0313 10:48:30.884026 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:31.264023 master-0 kubenswrapper[7271]: I0313 10:48:31.263868 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/3.log" Mar 13 10:48:31.883684 master-0 kubenswrapper[7271]: I0313 10:48:31.883562 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:31.883684 master-0 kubenswrapper[7271]: [-]has-synced failed: 
reason withheld Mar 13 10:48:31.883684 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:31.883684 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:31.883684 master-0 kubenswrapper[7271]: I0313 10:48:31.883679 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:32.880703 master-0 kubenswrapper[7271]: I0313 10:48:32.880640 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:48:32.883197 master-0 kubenswrapper[7271]: I0313 10:48:32.883130 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:32.883197 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:32.883197 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:32.883197 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:32.883420 master-0 kubenswrapper[7271]: I0313 10:48:32.883219 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:33.279059 master-0 kubenswrapper[7271]: I0313 10:48:33.278941 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-gsr52_070b85a0-f076-4750-aa00-dabba401dc75/cluster-baremetal-operator/1.log" Mar 13 10:48:33.279986 master-0 kubenswrapper[7271]: I0313 10:48:33.279913 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-gsr52_070b85a0-f076-4750-aa00-dabba401dc75/cluster-baremetal-operator/0.log" Mar 13 10:48:33.279986 master-0 kubenswrapper[7271]: I0313 10:48:33.279982 7271 generic.go:334] "Generic (PLEG): container finished" podID="070b85a0-f076-4750-aa00-dabba401dc75" containerID="d57d698ca8efd80b0c40df921aa32d90fdd37b423f221fd31fb6e45b5640ad03" exitCode=1 Mar 13 10:48:33.280269 master-0 kubenswrapper[7271]: I0313 10:48:33.280024 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" event={"ID":"070b85a0-f076-4750-aa00-dabba401dc75","Type":"ContainerDied","Data":"d57d698ca8efd80b0c40df921aa32d90fdd37b423f221fd31fb6e45b5640ad03"} Mar 13 10:48:33.280269 master-0 kubenswrapper[7271]: I0313 10:48:33.280073 7271 scope.go:117] "RemoveContainer" containerID="594cd9998ea936cf92d6d0f81aec77530767beeb227080ba41181e70dc234520" Mar 13 10:48:33.280809 master-0 kubenswrapper[7271]: I0313 10:48:33.280737 7271 scope.go:117] "RemoveContainer" containerID="d57d698ca8efd80b0c40df921aa32d90fdd37b423f221fd31fb6e45b5640ad03" Mar 13 10:48:33.281086 master-0 kubenswrapper[7271]: E0313 10:48:33.281035 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-gsr52_openshift-machine-api(070b85a0-f076-4750-aa00-dabba401dc75)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" podUID="070b85a0-f076-4750-aa00-dabba401dc75" Mar 13 10:48:33.883616 master-0 kubenswrapper[7271]: I0313 10:48:33.883528 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 13 10:48:33.883616 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:33.883616 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:33.883616 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:33.884020 master-0 kubenswrapper[7271]: I0313 10:48:33.883630 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:33.999240 master-0 kubenswrapper[7271]: E0313 10:48:33.999170 7271 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 10:48:34.289362 master-0 kubenswrapper[7271]: I0313 10:48:34.289212 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-gsr52_070b85a0-f076-4750-aa00-dabba401dc75/cluster-baremetal-operator/1.log" Mar 13 10:48:34.290633 master-0 kubenswrapper[7271]: I0313 10:48:34.290563 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:48:34.290717 master-0 kubenswrapper[7271]: I0313 10:48:34.290645 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:48:34.883185 master-0 kubenswrapper[7271]: I0313 10:48:34.883106 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:34.883185 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:34.883185 master-0 kubenswrapper[7271]: 
[+]process-running ok Mar 13 10:48:34.883185 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:34.883532 master-0 kubenswrapper[7271]: I0313 10:48:34.883212 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:35.882938 master-0 kubenswrapper[7271]: I0313 10:48:35.882881 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:35.882938 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:35.882938 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:35.882938 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:35.883698 master-0 kubenswrapper[7271]: I0313 10:48:35.882950 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:36.883189 master-0 kubenswrapper[7271]: I0313 10:48:36.883114 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:36.883189 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:36.883189 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:36.883189 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:36.883189 master-0 kubenswrapper[7271]: I0313 10:48:36.883187 7271 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:37.884757 master-0 kubenswrapper[7271]: I0313 10:48:37.884672 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:37.884757 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:37.884757 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:37.884757 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:37.885834 master-0 kubenswrapper[7271]: I0313 10:48:37.884772 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:38.882358 master-0 kubenswrapper[7271]: I0313 10:48:38.882302 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:38.882358 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:38.882358 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:38.882358 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:38.882678 master-0 kubenswrapper[7271]: I0313 10:48:38.882366 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
13 10:48:39.882844 master-0 kubenswrapper[7271]: I0313 10:48:39.882769 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:39.882844 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:39.882844 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:39.882844 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:39.883427 master-0 kubenswrapper[7271]: I0313 10:48:39.882863 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:40.469289 master-0 kubenswrapper[7271]: I0313 10:48:40.469154 7271 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:48:40.469566 master-0 kubenswrapper[7271]: I0313 10:48:40.469407 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:48:40.470707 master-0 kubenswrapper[7271]: I0313 10:48:40.470563 7271 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 
10:48:40.470883 master-0 kubenswrapper[7271]: I0313 10:48:40.470783 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" gracePeriod=30 Mar 13 10:48:40.592324 master-0 kubenswrapper[7271]: E0313 10:48:40.592262 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:48:40.883321 master-0 kubenswrapper[7271]: I0313 10:48:40.883230 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:40.883321 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:40.883321 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:40.883321 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:40.883321 master-0 kubenswrapper[7271]: I0313 10:48:40.883297 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:41.346822 master-0 kubenswrapper[7271]: I0313 10:48:41.346742 7271 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" 
containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" exitCode=2 Mar 13 10:48:41.347226 master-0 kubenswrapper[7271]: I0313 10:48:41.346824 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"} Mar 13 10:48:41.347226 master-0 kubenswrapper[7271]: I0313 10:48:41.346964 7271 scope.go:117] "RemoveContainer" containerID="532b7ad139ce93c1c35843b63b26445192b618a7ded3e4c717123c6f472fec2b" Mar 13 10:48:41.348800 master-0 kubenswrapper[7271]: I0313 10:48:41.348757 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" Mar 13 10:48:41.349341 master-0 kubenswrapper[7271]: E0313 10:48:41.349286 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:48:41.379378 master-0 kubenswrapper[7271]: I0313 10:48:41.379335 7271 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 10:48:41.379602 master-0 kubenswrapper[7271]: I0313 10:48:41.379555 7271 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler" 
probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:48:41.379750 master-0 kubenswrapper[7271]: I0313 10:48:41.379379 7271 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 10:48:41.379854 master-0 kubenswrapper[7271]: I0313 10:48:41.379803 7271 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 10:48:41.646857 master-0 kubenswrapper[7271]: I0313 10:48:41.646539 7271 scope.go:117] "RemoveContainer" containerID="5e8801f6c03277ad0f15ce8a685fd31f58e857c66d9382e667630e4deb5cc346" Mar 13 10:48:41.647172 master-0 kubenswrapper[7271]: E0313 10:48:41.647044 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-cbhxt_openshift-cluster-storage-operator(6622be09-206e-4d02-90ca-6d9f2fc852aa)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" podUID="6622be09-206e-4d02-90ca-6d9f2fc852aa" Mar 13 10:48:41.885240 master-0 kubenswrapper[7271]: I0313 10:48:41.885114 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:41.885240 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:41.885240 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:41.885240 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:41.886546 master-0 kubenswrapper[7271]: I0313 10:48:41.885324 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:42.882847 master-0 kubenswrapper[7271]: I0313 10:48:42.882786 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:42.882847 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:42.882847 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:42.882847 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:42.883223 master-0 kubenswrapper[7271]: I0313 10:48:42.882855 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:43.883441 master-0 kubenswrapper[7271]: I0313 10:48:43.883354 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:43.883441 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:43.883441 
master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:43.883441 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:43.884162 master-0 kubenswrapper[7271]: I0313 10:48:43.883458 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:44.646530 master-0 kubenswrapper[7271]: I0313 10:48:44.646433 7271 scope.go:117] "RemoveContainer" containerID="ea6075437ddab13db72254693b1402fadb6322d2c4a635387e569d11ef32e573" Mar 13 10:48:44.882759 master-0 kubenswrapper[7271]: I0313 10:48:44.882708 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:44.882759 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:44.882759 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:44.882759 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:44.882759 master-0 kubenswrapper[7271]: I0313 10:48:44.882769 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:44.905407 master-0 kubenswrapper[7271]: E0313 10:48:44.905047 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 10:48:45.376138 master-0 kubenswrapper[7271]: I0313 10:48:45.376066 7271 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/3.log" Mar 13 10:48:45.376532 master-0 kubenswrapper[7271]: I0313 10:48:45.376469 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerStarted","Data":"e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a"} Mar 13 10:48:45.882925 master-0 kubenswrapper[7271]: I0313 10:48:45.882838 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:45.882925 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:45.882925 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:45.882925 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:45.883250 master-0 kubenswrapper[7271]: I0313 10:48:45.882931 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:46.646009 master-0 kubenswrapper[7271]: I0313 10:48:46.645943 7271 scope.go:117] "RemoveContainer" containerID="d57d698ca8efd80b0c40df921aa32d90fdd37b423f221fd31fb6e45b5640ad03" Mar 13 10:48:46.883380 master-0 kubenswrapper[7271]: I0313 10:48:46.883279 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:46.883380 master-0 kubenswrapper[7271]: [-]has-synced failed: 
reason withheld Mar 13 10:48:46.883380 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:46.883380 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:46.883757 master-0 kubenswrapper[7271]: I0313 10:48:46.883389 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:47.215712 master-0 kubenswrapper[7271]: I0313 10:48:47.215650 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:48:47.216194 master-0 kubenswrapper[7271]: I0313 10:48:47.216160 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" Mar 13 10:48:47.216413 master-0 kubenswrapper[7271]: E0313 10:48:47.216381 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:48:47.400557 master-0 kubenswrapper[7271]: I0313 10:48:47.400442 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-gsr52_070b85a0-f076-4750-aa00-dabba401dc75/cluster-baremetal-operator/1.log" Mar 13 10:48:47.401255 master-0 kubenswrapper[7271]: I0313 10:48:47.401214 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" 
event={"ID":"070b85a0-f076-4750-aa00-dabba401dc75","Type":"ContainerStarted","Data":"a048f0d92734d1a569ea638b4c0ba3ac71e8a176cebd6ef2053430c9120d4890"} Mar 13 10:48:47.883610 master-0 kubenswrapper[7271]: I0313 10:48:47.883491 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:47.883610 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:47.883610 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:47.883610 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:47.884844 master-0 kubenswrapper[7271]: I0313 10:48:47.883687 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:48.883250 master-0 kubenswrapper[7271]: I0313 10:48:48.883132 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:48.883250 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:48.883250 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:48.883250 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:48.884257 master-0 kubenswrapper[7271]: I0313 10:48:48.883262 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:49.885402 master-0 kubenswrapper[7271]: I0313 
10:48:49.885302 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:49.885402 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:49.885402 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:49.885402 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:49.886207 master-0 kubenswrapper[7271]: I0313 10:48:49.885418 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:50.385255 master-0 kubenswrapper[7271]: I0313 10:48:50.385159 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:48:50.883114 master-0 kubenswrapper[7271]: I0313 10:48:50.883012 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:50.883114 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:50.883114 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:50.883114 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:50.883583 master-0 kubenswrapper[7271]: I0313 10:48:50.883151 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:51.884033 master-0 kubenswrapper[7271]: I0313 
10:48:51.883950 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:51.884033 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:51.884033 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:51.884033 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:51.885187 master-0 kubenswrapper[7271]: I0313 10:48:51.884055 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:52.883319 master-0 kubenswrapper[7271]: I0313 10:48:52.883217 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:48:52.883319 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:48:52.883319 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:48:52.883319 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:48:52.883974 master-0 kubenswrapper[7271]: I0313 10:48:52.883333 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:48:53.884015 master-0 kubenswrapper[7271]: I0313 10:48:53.883938 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:48:53.884015 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:48:53.884015 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:48:53.884015 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:48:53.884960 master-0 kubenswrapper[7271]: I0313 10:48:53.884056 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:48:54.883662 master-0 kubenswrapper[7271]: I0313 10:48:54.883579 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:48:54.883662 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:48:54.883662 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:48:54.883662 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:48:54.883662 master-0 kubenswrapper[7271]: I0313 10:48:54.883663 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:48:55.645879 master-0 kubenswrapper[7271]: I0313 10:48:55.645798 7271 scope.go:117] "RemoveContainer" containerID="5e8801f6c03277ad0f15ce8a685fd31f58e857c66d9382e667630e4deb5cc346"
Mar 13 10:48:55.646255 master-0 kubenswrapper[7271]: E0313 10:48:55.646159 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-cbhxt_openshift-cluster-storage-operator(6622be09-206e-4d02-90ca-6d9f2fc852aa)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" podUID="6622be09-206e-4d02-90ca-6d9f2fc852aa"
Mar 13 10:48:55.883089 master-0 kubenswrapper[7271]: I0313 10:48:55.883007 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:48:55.883089 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:48:55.883089 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:48:55.883089 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:48:55.883490 master-0 kubenswrapper[7271]: I0313 10:48:55.883103 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:48:56.883635 master-0 kubenswrapper[7271]: I0313 10:48:56.883473 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:48:56.883635 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:48:56.883635 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:48:56.883635 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:48:56.885199 master-0 kubenswrapper[7271]: I0313 10:48:56.883632 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:48:57.884239 master-0 kubenswrapper[7271]: I0313 10:48:57.884125 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:48:57.884239 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:48:57.884239 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:48:57.884239 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:48:57.885546 master-0 kubenswrapper[7271]: I0313 10:48:57.884272 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:48:58.646410 master-0 kubenswrapper[7271]: I0313 10:48:58.646318 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"
Mar 13 10:48:58.646969 master-0 kubenswrapper[7271]: E0313 10:48:58.646631 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:48:58.884032 master-0 kubenswrapper[7271]: I0313 10:48:58.883867 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:48:58.884032 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:48:58.884032 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:48:58.884032 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:48:58.884032 master-0 kubenswrapper[7271]: I0313 10:48:58.883998 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:48:59.883096 master-0 kubenswrapper[7271]: I0313 10:48:59.883022 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:48:59.883096 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:48:59.883096 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:48:59.883096 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:48:59.883096 master-0 kubenswrapper[7271]: I0313 10:48:59.883097 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:00.883767 master-0 kubenswrapper[7271]: I0313 10:49:00.883701 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:00.883767 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:00.883767 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:00.883767 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:00.884314 master-0 kubenswrapper[7271]: I0313 10:49:00.883789 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:01.883487 master-0 kubenswrapper[7271]: I0313 10:49:01.883420 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:01.883487 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:01.883487 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:01.883487 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:01.884702 master-0 kubenswrapper[7271]: I0313 10:49:01.884650 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:01.907691 master-0 kubenswrapper[7271]: E0313 10:49:01.907556 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s"
Mar 13 10:49:02.882351 master-0 kubenswrapper[7271]: I0313 10:49:02.882303 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:02.882351 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:02.882351 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:02.882351 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:02.882735 master-0 kubenswrapper[7271]: I0313 10:49:02.882364 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:03.883406 master-0 kubenswrapper[7271]: I0313 10:49:03.883350 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:03.883406 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:03.883406 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:03.883406 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:03.884329 master-0 kubenswrapper[7271]: I0313 10:49:03.884295 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:04.884275 master-0 kubenswrapper[7271]: I0313 10:49:04.884178 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:04.884275 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:04.884275 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:04.884275 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:04.885044 master-0 kubenswrapper[7271]: I0313 10:49:04.884281 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:05.115427 master-0 kubenswrapper[7271]: I0313 10:49:05.115359 7271 status_manager.go:851] "Failed to get status for pod" podUID="1769d48d-7ef0-48ee-9b7d-b46151ae5df6" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)"
Mar 13 10:49:05.882432 master-0 kubenswrapper[7271]: I0313 10:49:05.882374 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:05.882432 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:05.882432 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:05.882432 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:05.882860 master-0 kubenswrapper[7271]: I0313 10:49:05.882833 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:06.645889 master-0 kubenswrapper[7271]: I0313 10:49:06.645837 7271 scope.go:117] "RemoveContainer" containerID="5e8801f6c03277ad0f15ce8a685fd31f58e857c66d9382e667630e4deb5cc346"
Mar 13 10:49:06.646450 master-0 kubenswrapper[7271]: E0313 10:49:06.646050 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-cbhxt_openshift-cluster-storage-operator(6622be09-206e-4d02-90ca-6d9f2fc852aa)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" podUID="6622be09-206e-4d02-90ca-6d9f2fc852aa"
Mar 13 10:49:06.882290 master-0 kubenswrapper[7271]: I0313 10:49:06.882231 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:06.882290 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:06.882290 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:06.882290 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:06.882597 master-0 kubenswrapper[7271]: I0313 10:49:06.882300 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:07.883842 master-0 kubenswrapper[7271]: I0313 10:49:07.883772 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:07.883842 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:07.883842 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:07.883842 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:07.884491 master-0 kubenswrapper[7271]: I0313 10:49:07.883858 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:08.293576 master-0 kubenswrapper[7271]: E0313 10:49:08.293449 7271 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 10:49:08.882433 master-0 kubenswrapper[7271]: I0313 10:49:08.882321 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:08.882433 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:08.882433 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:08.882433 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:08.882433 master-0 kubenswrapper[7271]: I0313 10:49:08.882420 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:09.646227 master-0 kubenswrapper[7271]: I0313 10:49:09.646132 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"
Mar 13 10:49:09.647240 master-0 kubenswrapper[7271]: E0313 10:49:09.646409 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:49:09.883417 master-0 kubenswrapper[7271]: I0313 10:49:09.883366 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:09.883417 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:09.883417 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:09.883417 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:09.883800 master-0 kubenswrapper[7271]: I0313 10:49:09.883434 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:10.883441 master-0 kubenswrapper[7271]: I0313 10:49:10.883350 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:10.883441 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:10.883441 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:10.883441 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:10.884224 master-0 kubenswrapper[7271]: I0313 10:49:10.883448 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:11.883441 master-0 kubenswrapper[7271]: I0313 10:49:11.883352 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:11.883441 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:11.883441 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:11.883441 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:11.884191 master-0 kubenswrapper[7271]: I0313 10:49:11.883460 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:12.882959 master-0 kubenswrapper[7271]: I0313 10:49:12.882905 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:12.882959 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:12.882959 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:12.882959 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:12.883307 master-0 kubenswrapper[7271]: I0313 10:49:12.882996 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:13.882346 master-0 kubenswrapper[7271]: I0313 10:49:13.882251 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:13.882346 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:13.882346 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:13.882346 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:13.882996 master-0 kubenswrapper[7271]: I0313 10:49:13.882354 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:14.882372 master-0 kubenswrapper[7271]: I0313 10:49:14.882297 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:14.882372 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:14.882372 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:14.882372 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:14.883041 master-0 kubenswrapper[7271]: I0313 10:49:14.882385 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:15.884961 master-0 kubenswrapper[7271]: I0313 10:49:15.884907 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:15.884961 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:15.884961 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:15.884961 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:15.885721 master-0 kubenswrapper[7271]: I0313 10:49:15.884995 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:16.883568 master-0 kubenswrapper[7271]: I0313 10:49:16.883476 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:16.883568 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:16.883568 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:16.883568 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:16.883568 master-0 kubenswrapper[7271]: I0313 10:49:16.883546 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:17.884531 master-0 kubenswrapper[7271]: I0313 10:49:17.884382 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:17.884531 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:17.884531 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:17.884531 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:17.884531 master-0 kubenswrapper[7271]: I0313 10:49:17.884513 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:18.883985 master-0 kubenswrapper[7271]: I0313 10:49:18.883918 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:18.883985 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:18.883985 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:18.883985 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:18.884306 master-0 kubenswrapper[7271]: I0313 10:49:18.884009 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:18.909278 master-0 kubenswrapper[7271]: E0313 10:49:18.909159 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 13 10:49:19.882508 master-0 kubenswrapper[7271]: I0313 10:49:19.882407 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:19.882508 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:19.882508 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:19.882508 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:19.882508 master-0 kubenswrapper[7271]: I0313 10:49:19.882497 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:20.645408 master-0 kubenswrapper[7271]: I0313 10:49:20.645344 7271 scope.go:117] "RemoveContainer" containerID="5e8801f6c03277ad0f15ce8a685fd31f58e857c66d9382e667630e4deb5cc346"
Mar 13 10:49:20.882110 master-0 kubenswrapper[7271]: I0313 10:49:20.882049 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:20.882110 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:20.882110 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:20.882110 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:20.882110 master-0 kubenswrapper[7271]: I0313 10:49:20.882105 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:21.676943 master-0 kubenswrapper[7271]: I0313 10:49:21.676875 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/3.log"
Mar 13 10:49:21.676943 master-0 kubenswrapper[7271]: I0313 10:49:21.676946 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt" event={"ID":"6622be09-206e-4d02-90ca-6d9f2fc852aa","Type":"ContainerStarted","Data":"2cd12182a1907561adcb78ff45833715144327cee43fb9bbf5e0048cee4d593f"}
Mar 13 10:49:21.883448 master-0 kubenswrapper[7271]: I0313 10:49:21.883327 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:21.883448 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:21.883448 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:21.883448 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:21.883448 master-0 kubenswrapper[7271]: I0313 10:49:21.883390 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:22.884277 master-0 kubenswrapper[7271]: I0313 10:49:22.884138 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:22.884277 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:22.884277 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:22.884277 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:22.884277 master-0 kubenswrapper[7271]: I0313 10:49:22.884256 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:23.646207 master-0 kubenswrapper[7271]: I0313 10:49:23.646167 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"
Mar 13 10:49:23.646746 master-0 kubenswrapper[7271]: E0313 10:49:23.646722 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:49:23.883380 master-0 kubenswrapper[7271]: I0313 10:49:23.883324 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:23.883380 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:23.883380 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:23.883380 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:23.883780 master-0 kubenswrapper[7271]: I0313 10:49:23.883399 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:24.883000 master-0 kubenswrapper[7271]: I0313 10:49:24.882941 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:24.883000 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:24.883000 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:24.883000 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:24.884147 master-0 kubenswrapper[7271]: I0313 10:49:24.883019 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:25.882843 master-0 kubenswrapper[7271]: I0313 10:49:25.882762 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:25.882843 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:25.882843 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:25.882843 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:25.883574 master-0 kubenswrapper[7271]: I0313 10:49:25.882859 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:26.883507 master-0 kubenswrapper[7271]: I0313 10:49:26.883187 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:26.883507 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:26.883507 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:26.883507 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:26.883507 master-0 kubenswrapper[7271]: I0313 10:49:26.883260 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:27.882448 master-0 kubenswrapper[7271]: I0313 10:49:27.882347 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:27.882448 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:27.882448 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:27.882448 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:27.882860 master-0 kubenswrapper[7271]: I0313 10:49:27.882461 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:28.882855 master-0 kubenswrapper[7271]: I0313 10:49:28.882759 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:28.882855 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:28.882855 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:28.882855 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:28.882855 master-0 kubenswrapper[7271]: I0313 10:49:28.882818 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:29.889575 master-0 kubenswrapper[7271]: I0313 10:49:29.889490 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:29.889575 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:29.889575 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:29.889575 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:29.889575 master-0 kubenswrapper[7271]: I0313 10:49:29.889573 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:30.884031 master-0 kubenswrapper[7271]: I0313 10:49:30.883899 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:30.884031 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:30.884031 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:30.884031 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:30.884031 master-0 kubenswrapper[7271]: I0313 10:49:30.883992 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:31.882990 master-0 kubenswrapper[7271]: I0313 10:49:31.882917 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:31.882990 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:31.882990 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:31.882990 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:31.884034 master-0
kubenswrapper[7271]: I0313 10:49:31.883990 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:32.884410 master-0 kubenswrapper[7271]: I0313 10:49:32.884339 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:32.884410 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:32.884410 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:32.884410 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:32.885180 master-0 kubenswrapper[7271]: I0313 10:49:32.884443 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:33.882748 master-0 kubenswrapper[7271]: I0313 10:49:33.882670 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:33.882748 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:33.882748 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:33.882748 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:33.883178 master-0 kubenswrapper[7271]: I0313 10:49:33.882782 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:34.766812 master-0 kubenswrapper[7271]: I0313 10:49:34.766744 7271 generic.go:334] "Generic (PLEG): container finished" podID="9da11462-a91d-4d02-8614-78b4c5b2f7e2" containerID="00da2a7b5527973fbd194100f44590333c80d5dcf0e49c8db3fcca2c086cc934" exitCode=0 Mar 13 10:49:34.766812 master-0 kubenswrapper[7271]: I0313 10:49:34.766799 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" event={"ID":"9da11462-a91d-4d02-8614-78b4c5b2f7e2","Type":"ContainerDied","Data":"00da2a7b5527973fbd194100f44590333c80d5dcf0e49c8db3fcca2c086cc934"} Mar 13 10:49:34.767429 master-0 kubenswrapper[7271]: I0313 10:49:34.767174 7271 scope.go:117] "RemoveContainer" containerID="00da2a7b5527973fbd194100f44590333c80d5dcf0e49c8db3fcca2c086cc934" Mar 13 10:49:34.883397 master-0 kubenswrapper[7271]: I0313 10:49:34.883332 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:34.883397 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:34.883397 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:34.883397 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:34.883851 master-0 kubenswrapper[7271]: I0313 10:49:34.883408 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:35.775908 master-0 kubenswrapper[7271]: I0313 10:49:35.775348 7271 generic.go:334] "Generic (PLEG): container finished" podID="37b2e803-302b-4650-b18f-d3d2dd703bd5" 
containerID="881405211eef76d473660b20a0d3c866e54acadcefe8c182ab1f5f97e108929c" exitCode=0 Mar 13 10:49:35.775908 master-0 kubenswrapper[7271]: I0313 10:49:35.775422 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" event={"ID":"37b2e803-302b-4650-b18f-d3d2dd703bd5","Type":"ContainerDied","Data":"881405211eef76d473660b20a0d3c866e54acadcefe8c182ab1f5f97e108929c"} Mar 13 10:49:35.775908 master-0 kubenswrapper[7271]: I0313 10:49:35.775464 7271 scope.go:117] "RemoveContainer" containerID="0726d914d99337ac6ae1fc3306b6380d27700c4e1ef052dd78af4add66671237" Mar 13 10:49:35.776506 master-0 kubenswrapper[7271]: I0313 10:49:35.776453 7271 scope.go:117] "RemoveContainer" containerID="881405211eef76d473660b20a0d3c866e54acadcefe8c182ab1f5f97e108929c" Mar 13 10:49:35.777532 master-0 kubenswrapper[7271]: I0313 10:49:35.777499 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" event={"ID":"9da11462-a91d-4d02-8614-78b4c5b2f7e2","Type":"ContainerStarted","Data":"f8acb3570c0f04f1102bd03e5bcff11282d55049b349ebfe69c648c7779d0a74"} Mar 13 10:49:35.882706 master-0 kubenswrapper[7271]: I0313 10:49:35.882549 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:35.882706 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:35.882706 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:35.882706 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:35.883066 master-0 kubenswrapper[7271]: I0313 10:49:35.882752 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:35.911690 master-0 kubenswrapper[7271]: E0313 10:49:35.911564 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 10:49:36.645779 master-0 kubenswrapper[7271]: I0313 10:49:36.645717 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" Mar 13 10:49:36.646158 master-0 kubenswrapper[7271]: E0313 10:49:36.645966 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:49:36.786919 master-0 kubenswrapper[7271]: I0313 10:49:36.786747 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv" event={"ID":"37b2e803-302b-4650-b18f-d3d2dd703bd5","Type":"ContainerStarted","Data":"ec097f2af7a9754b59c56a257c7c6fd1c57c0f994ccd789cef43bd845640a144"} Mar 13 10:49:36.788558 master-0 kubenswrapper[7271]: I0313 10:49:36.788509 7271 generic.go:334] "Generic (PLEG): container finished" podID="26cc0e72-8b4f-4087-89b9-05d2cf6df3f6" containerID="a1bf753439496bde197d1c543409be9bfb058607cd0879d7141d07df38f38943" exitCode=0 Mar 13 10:49:36.788676 master-0 kubenswrapper[7271]: I0313 10:49:36.788561 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" 
event={"ID":"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6","Type":"ContainerDied","Data":"a1bf753439496bde197d1c543409be9bfb058607cd0879d7141d07df38f38943"} Mar 13 10:49:36.789260 master-0 kubenswrapper[7271]: I0313 10:49:36.789224 7271 scope.go:117] "RemoveContainer" containerID="a1bf753439496bde197d1c543409be9bfb058607cd0879d7141d07df38f38943" Mar 13 10:49:36.883005 master-0 kubenswrapper[7271]: I0313 10:49:36.882961 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:36.883005 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:36.883005 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:36.883005 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:36.883339 master-0 kubenswrapper[7271]: I0313 10:49:36.883008 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:37.796703 master-0 kubenswrapper[7271]: I0313 10:49:37.796638 7271 generic.go:334] "Generic (PLEG): container finished" podID="0ac1a605-d2d5-4004-96f5-121c20555bde" containerID="9fa1a1f3dc431f4d1989376ade490c97b3ca19baaab0c502fea959b427739c54" exitCode=0 Mar 13 10:49:37.797308 master-0 kubenswrapper[7271]: I0313 10:49:37.796736 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst" event={"ID":"0ac1a605-d2d5-4004-96f5-121c20555bde","Type":"ContainerDied","Data":"9fa1a1f3dc431f4d1989376ade490c97b3ca19baaab0c502fea959b427739c54"} Mar 13 10:49:37.797308 master-0 kubenswrapper[7271]: I0313 10:49:37.797195 7271 scope.go:117] "RemoveContainer" 
containerID="9fa1a1f3dc431f4d1989376ade490c97b3ca19baaab0c502fea959b427739c54" Mar 13 10:49:37.800267 master-0 kubenswrapper[7271]: I0313 10:49:37.800210 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" event={"ID":"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6","Type":"ContainerStarted","Data":"848e1408fe9565a08993aaa7a95464aa09d3db6d8af6cb49cd8e2129c86c896f"} Mar 13 10:49:37.883150 master-0 kubenswrapper[7271]: I0313 10:49:37.883092 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:37.883150 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:37.883150 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:37.883150 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:37.883356 master-0 kubenswrapper[7271]: I0313 10:49:37.883182 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:38.812856 master-0 kubenswrapper[7271]: I0313 10:49:38.812747 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst" event={"ID":"0ac1a605-d2d5-4004-96f5-121c20555bde","Type":"ContainerStarted","Data":"c3821e5f5b320c47587f334c6b1a8168e8afd8a4fa3a52e5c09b9493cfbd5a81"} Mar 13 10:49:38.883702 master-0 kubenswrapper[7271]: I0313 10:49:38.883571 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 13 10:49:38.883702 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:38.883702 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:38.883702 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:38.884422 master-0 kubenswrapper[7271]: I0313 10:49:38.883736 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:39.883075 master-0 kubenswrapper[7271]: I0313 10:49:39.882992 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:39.883075 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:39.883075 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:39.883075 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:39.883822 master-0 kubenswrapper[7271]: I0313 10:49:39.883082 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:40.882705 master-0 kubenswrapper[7271]: I0313 10:49:40.882625 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:40.882705 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:40.882705 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:40.882705 master-0 
kubenswrapper[7271]: healthz check failed Mar 13 10:49:40.882705 master-0 kubenswrapper[7271]: I0313 10:49:40.882712 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:41.133841 master-0 kubenswrapper[7271]: I0313 10:49:41.133691 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-d787l"] Mar 13 10:49:41.140242 master-0 kubenswrapper[7271]: I0313 10:49:41.140192 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-d787l"] Mar 13 10:49:41.210610 master-0 kubenswrapper[7271]: I0313 10:49:41.210543 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 13 10:49:41.217610 master-0 kubenswrapper[7271]: I0313 10:49:41.214579 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 13 10:49:41.656150 master-0 kubenswrapper[7271]: I0313 10:49:41.656081 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" path="/var/lib/kubelet/pods/95339220-324d-45e7-bdc2-e4f42fbd1d32/volumes" Mar 13 10:49:41.659768 master-0 kubenswrapper[7271]: I0313 10:49:41.658999 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3bcb671-5236-49fb-8540-131f18b91fc3" path="/var/lib/kubelet/pods/b3bcb671-5236-49fb-8540-131f18b91fc3/volumes" Mar 13 10:49:41.882851 master-0 kubenswrapper[7271]: I0313 10:49:41.882757 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 
13 10:49:41.882851 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:41.882851 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:41.882851 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:41.883141 master-0 kubenswrapper[7271]: I0313 10:49:41.882855 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:42.839938 master-0 kubenswrapper[7271]: I0313 10:49:42.839798 7271 generic.go:334] "Generic (PLEG): container finished" podID="574bf255-14b3-40af-b240-2d3abd5b86b8" containerID="5562479ec1e49b40c330a36ec4d9ac6d15b4428df0c9b17bcdf8d8cf48cf7a09" exitCode=0 Mar 13 10:49:42.839938 master-0 kubenswrapper[7271]: I0313 10:49:42.839917 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" event={"ID":"574bf255-14b3-40af-b240-2d3abd5b86b8","Type":"ContainerDied","Data":"5562479ec1e49b40c330a36ec4d9ac6d15b4428df0c9b17bcdf8d8cf48cf7a09"} Mar 13 10:49:42.841096 master-0 kubenswrapper[7271]: I0313 10:49:42.840011 7271 scope.go:117] "RemoveContainer" containerID="a384e9c9352558c7493eb0f31fbfe7c7667c323e9cd28c07e6b3e552b94e372f" Mar 13 10:49:42.841096 master-0 kubenswrapper[7271]: I0313 10:49:42.840775 7271 scope.go:117] "RemoveContainer" containerID="5562479ec1e49b40c330a36ec4d9ac6d15b4428df0c9b17bcdf8d8cf48cf7a09" Mar 13 10:49:42.883895 master-0 kubenswrapper[7271]: I0313 10:49:42.883836 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:42.883895 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:42.883895 master-0 
kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:42.883895 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:42.884475 master-0 kubenswrapper[7271]: I0313 10:49:42.884429 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:43.850217 master-0 kubenswrapper[7271]: I0313 10:49:43.850124 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" event={"ID":"574bf255-14b3-40af-b240-2d3abd5b86b8","Type":"ContainerStarted","Data":"955b8813c0de352f821613306592e50fe0efb766fe72a1f9c78cc7080034256f"} Mar 13 10:49:43.883134 master-0 kubenswrapper[7271]: I0313 10:49:43.883076 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:43.883134 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:43.883134 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:43.883134 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:43.883466 master-0 kubenswrapper[7271]: I0313 10:49:43.883166 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:44.858393 master-0 kubenswrapper[7271]: I0313 10:49:44.858331 7271 generic.go:334] "Generic (PLEG): container finished" podID="5ed5e77b-948b-4d94-ac9f-440ee3c07e18" containerID="7f952b61d71e907b8ab35c403ca342055b58e2b44f1c8092061e8d04df9ac501" exitCode=0 Mar 13 10:49:44.858393 master-0 kubenswrapper[7271]: 
I0313 10:49:44.858387 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" event={"ID":"5ed5e77b-948b-4d94-ac9f-440ee3c07e18","Type":"ContainerDied","Data":"7f952b61d71e907b8ab35c403ca342055b58e2b44f1c8092061e8d04df9ac501"} Mar 13 10:49:44.859301 master-0 kubenswrapper[7271]: I0313 10:49:44.858433 7271 scope.go:117] "RemoveContainer" containerID="dacb5471d19718622299f0fa6f9e909a820c9329353d0e6ad130c4eb61cefa28" Mar 13 10:49:44.859546 master-0 kubenswrapper[7271]: I0313 10:49:44.859505 7271 scope.go:117] "RemoveContainer" containerID="7f952b61d71e907b8ab35c403ca342055b58e2b44f1c8092061e8d04df9ac501" Mar 13 10:49:44.888886 master-0 kubenswrapper[7271]: I0313 10:49:44.888835 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:44.888886 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:44.888886 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:44.888886 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:44.889253 master-0 kubenswrapper[7271]: I0313 10:49:44.888906 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:45.867898 master-0 kubenswrapper[7271]: I0313 10:49:45.867832 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" event={"ID":"5ed5e77b-948b-4d94-ac9f-440ee3c07e18","Type":"ContainerStarted","Data":"f841fdbf3007e0b7c35076250861692dc5347b65a9d6738c6cba18bfc4e78138"} Mar 13 10:49:45.883273 master-0 kubenswrapper[7271]: 
I0313 10:49:45.883196 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:45.883273 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:45.883273 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:45.883273 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:45.883701 master-0 kubenswrapper[7271]: I0313 10:49:45.883289 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:46.882784 master-0 kubenswrapper[7271]: I0313 10:49:46.882678 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:46.882784 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:46.882784 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:46.882784 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:46.882784 master-0 kubenswrapper[7271]: I0313 10:49:46.882770 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:47.883417 master-0 kubenswrapper[7271]: I0313 10:49:47.883334 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:47.883417 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:47.883417 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:47.883417 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:47.883417 master-0 kubenswrapper[7271]: I0313 10:49:47.883414 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:48.646635 master-0 kubenswrapper[7271]: I0313 10:49:48.646523 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" Mar 13 10:49:48.647169 master-0 kubenswrapper[7271]: E0313 10:49:48.647100 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:49:48.884224 master-0 kubenswrapper[7271]: I0313 10:49:48.884104 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:48.884224 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:48.884224 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:48.884224 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:48.884833 master-0 kubenswrapper[7271]: I0313 10:49:48.884261 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:49.884297 master-0 kubenswrapper[7271]: I0313 10:49:49.884190 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:49.884297 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:49.884297 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:49.884297 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:49.885095 master-0 kubenswrapper[7271]: I0313 10:49:49.884309 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:50.883064 master-0 kubenswrapper[7271]: I0313 10:49:50.882991 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:50.883064 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:50.883064 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:50.883064 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:50.883575 master-0 kubenswrapper[7271]: I0313 10:49:50.883112 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:51.884450 
master-0 kubenswrapper[7271]: I0313 10:49:51.884290 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:49:51.884450 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:49:51.884450 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:49:51.884450 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:49:51.884450 master-0 kubenswrapper[7271]: I0313 10:49:51.884392 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:49:51.917500 master-0 kubenswrapper[7271]: I0313 10:49:51.917433 7271 generic.go:334] "Generic (PLEG): container finished" podID="86ae8cb8-72b3-4be6-9feb-ee0c0da42dba" containerID="5cf7d401ea622e52729b46eea598afe245447756a5d119bc7987bfb6c5cfb794" exitCode=0 Mar 13 10:49:51.917960 master-0 kubenswrapper[7271]: I0313 10:49:51.917901 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" event={"ID":"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba","Type":"ContainerDied","Data":"5cf7d401ea622e52729b46eea598afe245447756a5d119bc7987bfb6c5cfb794"} Mar 13 10:49:51.918194 master-0 kubenswrapper[7271]: I0313 10:49:51.918161 7271 scope.go:117] "RemoveContainer" containerID="2c461d42e265a3320bcaee208db9040eedffe39900d9e8aa36490e00a5c604c0" Mar 13 10:49:51.919442 master-0 kubenswrapper[7271]: I0313 10:49:51.919377 7271 scope.go:117] "RemoveContainer" containerID="5cf7d401ea622e52729b46eea598afe245447756a5d119bc7987bfb6c5cfb794" Mar 13 10:49:52.882444 master-0 kubenswrapper[7271]: I0313 10:49:52.882380 7271 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:52.882444 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:52.882444 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:52.882444 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:52.882444 master-0 kubenswrapper[7271]: I0313 10:49:52.882439 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:52.913938 master-0 kubenswrapper[7271]: E0313 10:49:52.913542 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 13 10:49:52.927472 master-0 kubenswrapper[7271]: I0313 10:49:52.927422 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" event={"ID":"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba","Type":"ContainerStarted","Data":"6830ae5b2479d06e8f24009a6cc57457cfac0ea66c4803341d533fc0edd52e38"}
Mar 13 10:49:53.883373 master-0 kubenswrapper[7271]: I0313 10:49:53.883301 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:53.883373 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:53.883373 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:53.883373 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:53.883692 master-0 kubenswrapper[7271]: I0313 10:49:53.883376 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:54.883210 master-0 kubenswrapper[7271]: I0313 10:49:54.883134 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:54.883210 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:54.883210 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:54.883210 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:54.883210 master-0 kubenswrapper[7271]: I0313 10:49:54.883212 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:55.882487 master-0 kubenswrapper[7271]: I0313 10:49:55.882432 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:55.882487 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:55.882487 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:55.882487 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:55.882487 master-0 kubenswrapper[7271]: I0313 10:49:55.882485 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:55.950864 master-0 kubenswrapper[7271]: I0313 10:49:55.950731 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-6vpl4_1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/network-operator/0.log"
Mar 13 10:49:55.950864 master-0 kubenswrapper[7271]: I0313 10:49:55.950794 7271 generic.go:334] "Generic (PLEG): container finished" podID="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" containerID="4d75e74c4df786ae928889ac54113d7b673c3ebf79a2a08a34f9fbe9b63c1453" exitCode=0
Mar 13 10:49:55.950864 master-0 kubenswrapper[7271]: I0313 10:49:55.950825 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" event={"ID":"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9","Type":"ContainerDied","Data":"4d75e74c4df786ae928889ac54113d7b673c3ebf79a2a08a34f9fbe9b63c1453"}
Mar 13 10:49:55.950864 master-0 kubenswrapper[7271]: I0313 10:49:55.950860 7271 scope.go:117] "RemoveContainer" containerID="5e2eaafddd132326dc9e3d7a39739553509b59eb3a4133fcdb22787eb5fde49c"
Mar 13 10:49:55.951711 master-0 kubenswrapper[7271]: I0313 10:49:55.951548 7271 scope.go:117] "RemoveContainer" containerID="4d75e74c4df786ae928889ac54113d7b673c3ebf79a2a08a34f9fbe9b63c1453"
Mar 13 10:49:56.883484 master-0 kubenswrapper[7271]: I0313 10:49:56.883381 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:56.883484 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:56.883484 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:56.883484 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:56.883484 master-0 kubenswrapper[7271]: I0313 10:49:56.883462 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:56.961292 master-0 kubenswrapper[7271]: I0313 10:49:56.961226 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" event={"ID":"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9","Type":"ContainerStarted","Data":"87d19e0bcae70620b5063201bc148f0a8dd9c553b9cd2acc3f50a646c9d3752e"}
Mar 13 10:49:57.883758 master-0 kubenswrapper[7271]: I0313 10:49:57.883684 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:57.883758 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:57.883758 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:57.883758 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:57.883758 master-0 kubenswrapper[7271]: I0313 10:49:57.883752 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:58.882888 master-0 kubenswrapper[7271]: I0313 10:49:58.882827 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:58.882888 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:58.882888 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:58.882888 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:58.883504 master-0 kubenswrapper[7271]: I0313 10:49:58.882893 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:59.882853 master-0 kubenswrapper[7271]: I0313 10:49:59.882798 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:49:59.882853 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:49:59.882853 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:49:59.882853 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:49:59.883520 master-0 kubenswrapper[7271]: I0313 10:49:59.882886 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:49:59.982273 master-0 kubenswrapper[7271]: I0313 10:49:59.981838 7271 generic.go:334] "Generic (PLEG): container finished" podID="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" containerID="cd940301b6045fcf3388088b051ec834a3261f017e1dcca1b8063296e4c0a2f1" exitCode=0
Mar 13 10:49:59.982273 master-0 kubenswrapper[7271]: I0313 10:49:59.981907 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" event={"ID":"1434c4a2-5c4d-478a-a16a-7d6a52ea3099","Type":"ContainerDied","Data":"cd940301b6045fcf3388088b051ec834a3261f017e1dcca1b8063296e4c0a2f1"}
Mar 13 10:49:59.982273 master-0 kubenswrapper[7271]: I0313 10:49:59.981953 7271 scope.go:117] "RemoveContainer" containerID="07efb32e685572e6b4d6844e3569402a8bdfbf11ae0829c85acd5de7788ca4d9"
Mar 13 10:49:59.982723 master-0 kubenswrapper[7271]: I0313 10:49:59.982694 7271 scope.go:117] "RemoveContainer" containerID="cd940301b6045fcf3388088b051ec834a3261f017e1dcca1b8063296e4c0a2f1"
Mar 13 10:50:00.883919 master-0 kubenswrapper[7271]: I0313 10:50:00.883854 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:50:00.883919 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:50:00.883919 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:50:00.883919 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:50:00.884763 master-0 kubenswrapper[7271]: I0313 10:50:00.883934 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:50:00.988939 master-0 kubenswrapper[7271]: I0313 10:50:00.988852 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" event={"ID":"1434c4a2-5c4d-478a-a16a-7d6a52ea3099","Type":"ContainerStarted","Data":"73707f727e0d168aab4f51e1bd7f37ba50e5aa558dee3a024611190790fa55a6"}
Mar 13 10:50:01.647703 master-0 kubenswrapper[7271]: I0313 10:50:01.647532 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"
Mar 13 10:50:01.648433 master-0 kubenswrapper[7271]: E0313 10:50:01.647922 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:50:01.883181 master-0 kubenswrapper[7271]: I0313 10:50:01.883115 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:50:01.883181 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:50:01.883181 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:50:01.883181 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:50:01.883817 master-0 kubenswrapper[7271]: I0313 10:50:01.883772 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:50:02.883011 master-0 kubenswrapper[7271]: I0313 10:50:02.882871 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:50:02.883011 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:50:02.883011 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:50:02.883011 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:50:02.883011 master-0 kubenswrapper[7271]: I0313 10:50:02.882960 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:50:03.009874 master-0 kubenswrapper[7271]: I0313 10:50:03.009767 7271 generic.go:334] "Generic (PLEG): container finished" podID="8f9db15a-8854-485b-9863-9cbe5dddd977" containerID="3d7f37aa994251928291249049a2be620c22f26b28c64911444e794ad1a679e5" exitCode=0
Mar 13 10:50:03.009874 master-0 kubenswrapper[7271]: I0313 10:50:03.009850 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" event={"ID":"8f9db15a-8854-485b-9863-9cbe5dddd977","Type":"ContainerDied","Data":"3d7f37aa994251928291249049a2be620c22f26b28c64911444e794ad1a679e5"}
Mar 13 10:50:03.010550 master-0 kubenswrapper[7271]: I0313 10:50:03.009912 7271 scope.go:117] "RemoveContainer" containerID="30ed7322c0091d1c760c898b8eeff7c2a46e380aac09f0741b2738a7131c9763"
Mar 13 10:50:03.011460 master-0 kubenswrapper[7271]: I0313 10:50:03.010872 7271 scope.go:117] "RemoveContainer" containerID="3d7f37aa994251928291249049a2be620c22f26b28c64911444e794ad1a679e5"
Mar 13 10:50:03.013887 master-0 kubenswrapper[7271]: I0313 10:50:03.013839 7271 generic.go:334] "Generic (PLEG): container finished" podID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerID="a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983" exitCode=0
Mar 13 10:50:03.014012 master-0 kubenswrapper[7271]: I0313 10:50:03.013878 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" event={"ID":"c09f42db-e6d7-469d-9761-88a879f6aa6b","Type":"ContainerDied","Data":"a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983"}
Mar 13 10:50:03.015090 master-0 kubenswrapper[7271]: I0313 10:50:03.015019 7271 scope.go:117] "RemoveContainer" containerID="a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983"
Mar 13 10:50:03.017397 master-0 kubenswrapper[7271]: I0313 10:50:03.017340 7271 generic.go:334] "Generic (PLEG): container finished" podID="a1a998af-4fc0-4078-a6a0-93dde6c00508" containerID="b2d3650b18e8d4e9f38822804153cd7a45f1b0959bcb61f0ce6a90a1570211e0" exitCode=0
Mar 13 10:50:03.017678 master-0 kubenswrapper[7271]: I0313 10:50:03.017405 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" event={"ID":"a1a998af-4fc0-4078-a6a0-93dde6c00508","Type":"ContainerDied","Data":"b2d3650b18e8d4e9f38822804153cd7a45f1b0959bcb61f0ce6a90a1570211e0"}
Mar 13 10:50:03.018379 master-0 kubenswrapper[7271]: I0313 10:50:03.018062 7271 scope.go:117] "RemoveContainer" containerID="b2d3650b18e8d4e9f38822804153cd7a45f1b0959bcb61f0ce6a90a1570211e0"
Mar 13 10:50:03.026934 master-0 kubenswrapper[7271]: I0313 10:50:03.021433 7271 generic.go:334] "Generic (PLEG): container finished" podID="d0f42a72-24c7-49e6-8edb-97b2b0d6183a" containerID="d13596a56d4b7303ec265a6d08c85fbe9795571675ab43829e0e95ae8ae9fbbf" exitCode=0
Mar 13 10:50:03.026934 master-0 kubenswrapper[7271]: I0313 10:50:03.021507 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" event={"ID":"d0f42a72-24c7-49e6-8edb-97b2b0d6183a","Type":"ContainerDied","Data":"d13596a56d4b7303ec265a6d08c85fbe9795571675ab43829e0e95ae8ae9fbbf"}
Mar 13 10:50:03.026934 master-0 kubenswrapper[7271]: I0313 10:50:03.022668 7271 scope.go:117] "RemoveContainer" containerID="d13596a56d4b7303ec265a6d08c85fbe9795571675ab43829e0e95ae8ae9fbbf"
Mar 13 10:50:03.026934 master-0 kubenswrapper[7271]: I0313 10:50:03.023923 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nsg74_282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/openshift-controller-manager-operator/1.log"
Mar 13 10:50:03.026934 master-0 kubenswrapper[7271]: I0313 10:50:03.023992 7271 generic.go:334] "Generic (PLEG): container finished" podID="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" containerID="44045eb34dbce8a8d8c5bec28be559a0d562acea9909308b142b2b5b5860a229" exitCode=0
Mar 13 10:50:03.026934 master-0 kubenswrapper[7271]: I0313 10:50:03.024065 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" event={"ID":"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303","Type":"ContainerDied","Data":"44045eb34dbce8a8d8c5bec28be559a0d562acea9909308b142b2b5b5860a229"}
Mar 13 10:50:03.026934 master-0 kubenswrapper[7271]: I0313 10:50:03.024381 7271 scope.go:117] "RemoveContainer" containerID="44045eb34dbce8a8d8c5bec28be559a0d562acea9909308b142b2b5b5860a229"
Mar 13 10:50:03.028181 master-0 kubenswrapper[7271]: I0313 10:50:03.028109 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-d5b45_8a305f45-8689-45a8-8c8b-5954f2c863df/package-server-manager/0.log"
Mar 13 10:50:03.029409 master-0 kubenswrapper[7271]: I0313 10:50:03.029345 7271 generic.go:334] "Generic (PLEG): container finished" podID="8a305f45-8689-45a8-8c8b-5954f2c863df" containerID="89274f7911bc25e38977ddb45d006b7195ff00ecbb96f23c5359ae00a584f176" exitCode=1
Mar 13 10:50:03.029409 master-0 kubenswrapper[7271]: I0313 10:50:03.029385 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" event={"ID":"8a305f45-8689-45a8-8c8b-5954f2c863df","Type":"ContainerDied","Data":"89274f7911bc25e38977ddb45d006b7195ff00ecbb96f23c5359ae00a584f176"}
Mar 13 10:50:03.030353 master-0 kubenswrapper[7271]: I0313 10:50:03.030159 7271 scope.go:117] "RemoveContainer" containerID="89274f7911bc25e38977ddb45d006b7195ff00ecbb96f23c5359ae00a584f176"
Mar 13 10:50:03.032246 master-0 kubenswrapper[7271]: I0313 10:50:03.032176 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-7h8nz_48f99840-4d9e-49c5-819e-0bb15493feb5/machine-api-operator/0.log"
Mar 13 10:50:03.032910 master-0 kubenswrapper[7271]: I0313 10:50:03.032844 7271 generic.go:334] "Generic (PLEG): container finished" podID="48f99840-4d9e-49c5-819e-0bb15493feb5" containerID="3db54e90276a64402967c0bc59c00901e01327339bb78dd658883ac9c02f925f" exitCode=255
Mar 13 10:50:03.033527 master-0 kubenswrapper[7271]: I0313 10:50:03.032943 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" event={"ID":"48f99840-4d9e-49c5-819e-0bb15493feb5","Type":"ContainerDied","Data":"3db54e90276a64402967c0bc59c00901e01327339bb78dd658883ac9c02f925f"}
Mar 13 10:50:03.037348 master-0 kubenswrapper[7271]: I0313 10:50:03.035536 7271 scope.go:117] "RemoveContainer" containerID="3db54e90276a64402967c0bc59c00901e01327339bb78dd658883ac9c02f925f"
Mar 13 10:50:03.038664 master-0 kubenswrapper[7271]: I0313 10:50:03.038570 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-pzjxd_d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33/cluster-autoscaler-operator/0.log"
Mar 13 10:50:03.040175 master-0 kubenswrapper[7271]: I0313 10:50:03.039754 7271 generic.go:334] "Generic (PLEG): container finished" podID="d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33" containerID="33f485f0f2a1052d43c6456fe1c55f48c0eae8c08bc7615626d7dbf11fd3c26a" exitCode=255
Mar 13 10:50:03.040175 master-0 kubenswrapper[7271]: I0313 10:50:03.039995 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" event={"ID":"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33","Type":"ContainerDied","Data":"33f485f0f2a1052d43c6456fe1c55f48c0eae8c08bc7615626d7dbf11fd3c26a"}
Mar 13 10:50:03.040531 master-0 kubenswrapper[7271]: I0313 10:50:03.040466 7271 scope.go:117] "RemoveContainer" containerID="33f485f0f2a1052d43c6456fe1c55f48c0eae8c08bc7615626d7dbf11fd3c26a"
Mar 13 10:50:03.043355 master-0 kubenswrapper[7271]: I0313 10:50:03.043280 7271 generic.go:334] "Generic (PLEG): container finished" podID="ec3168fc-6c8f-4603-94e0-17b1ae22a802" containerID="294850f202234f4a9d138e028654f94bb9813203f7edf3397d10697e7a4b46a2" exitCode=0
Mar 13 10:50:03.043491 master-0 kubenswrapper[7271]: I0313 10:50:03.043394 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" event={"ID":"ec3168fc-6c8f-4603-94e0-17b1ae22a802","Type":"ContainerDied","Data":"294850f202234f4a9d138e028654f94bb9813203f7edf3397d10697e7a4b46a2"}
Mar 13 10:50:03.044630 master-0 kubenswrapper[7271]: I0313 10:50:03.044531 7271 scope.go:117] "RemoveContainer" containerID="294850f202234f4a9d138e028654f94bb9813203f7edf3397d10697e7a4b46a2"
Mar 13 10:50:03.049745 master-0 kubenswrapper[7271]: I0313 10:50:03.049676 7271 generic.go:334] "Generic (PLEG): container finished" podID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerID="080bec4d72d5bc2a5ff39e071b40e2b30bc6c479f34acbf3881af3489f75aaae" exitCode=0
Mar 13 10:50:03.050809 master-0 kubenswrapper[7271]: I0313 10:50:03.049754 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" event={"ID":"6ed47c57-533f-43e4-88eb-07da29b4878f","Type":"ContainerDied","Data":"080bec4d72d5bc2a5ff39e071b40e2b30bc6c479f34acbf3881af3489f75aaae"}
Mar 13 10:50:03.050809 master-0 kubenswrapper[7271]: I0313 10:50:03.050198 7271 scope.go:117] "RemoveContainer" containerID="080bec4d72d5bc2a5ff39e071b40e2b30bc6c479f34acbf3881af3489f75aaae"
Mar 13 10:50:03.053100 master-0 kubenswrapper[7271]: I0313 10:50:03.051671 7271 generic.go:334] "Generic (PLEG): container finished" podID="549bd192-0235-4994-b485-f1b10d16f6b5" containerID="271da4cc5b20956051ed1d7f97405260dffc34901d137d8e75b3c407349229eb" exitCode=0
Mar 13 10:50:03.053100 master-0 kubenswrapper[7271]: I0313 10:50:03.051730 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" event={"ID":"549bd192-0235-4994-b485-f1b10d16f6b5","Type":"ContainerDied","Data":"271da4cc5b20956051ed1d7f97405260dffc34901d137d8e75b3c407349229eb"}
Mar 13 10:50:03.053100 master-0 kubenswrapper[7271]: I0313 10:50:03.052568 7271 scope.go:117] "RemoveContainer" containerID="271da4cc5b20956051ed1d7f97405260dffc34901d137d8e75b3c407349229eb"
Mar 13 10:50:03.054676 master-0 kubenswrapper[7271]: I0313 10:50:03.053838 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-9fptc_42b4d53c-af72-44c8-9605-271445f95f87/cluster-node-tuning-operator/0.log"
Mar 13 10:50:03.054676 master-0 kubenswrapper[7271]: I0313 10:50:03.053875 7271 generic.go:334] "Generic (PLEG): container finished" podID="42b4d53c-af72-44c8-9605-271445f95f87" containerID="4898ddf0b80011b0f9f0a24077d87c24f74962cf228e87be2367d09c896182b1" exitCode=1
Mar 13 10:50:03.054676 master-0 kubenswrapper[7271]: I0313 10:50:03.053924 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" event={"ID":"42b4d53c-af72-44c8-9605-271445f95f87","Type":"ContainerDied","Data":"4898ddf0b80011b0f9f0a24077d87c24f74962cf228e87be2367d09c896182b1"}
Mar 13 10:50:03.056835 master-0 kubenswrapper[7271]: I0313 10:50:03.054953 7271 scope.go:117] "RemoveContainer" containerID="dbff0a4ca77dfd3c5dce218a106dba837080cd80ee7f274b5ebceb8f682ccabd"
Mar 13 10:50:03.056835 master-0 kubenswrapper[7271]: I0313 10:50:03.055206 7271 scope.go:117] "RemoveContainer" containerID="4898ddf0b80011b0f9f0a24077d87c24f74962cf228e87be2367d09c896182b1"
Mar 13 10:50:03.069977 master-0 kubenswrapper[7271]: I0313 10:50:03.069921 7271 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="e9000808717ea9c0e3216e703e0ba1564b42f55e959843c60a49ae0e4eb9a8e7" exitCode=0
Mar 13 10:50:03.070209 master-0 kubenswrapper[7271]: I0313 10:50:03.070016 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"e9000808717ea9c0e3216e703e0ba1564b42f55e959843c60a49ae0e4eb9a8e7"}
Mar 13 10:50:03.072025 master-0 kubenswrapper[7271]: I0313 10:50:03.070786 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"
Mar 13 10:50:03.072025 master-0 kubenswrapper[7271]: I0313 10:50:03.070816 7271 scope.go:117] "RemoveContainer" containerID="e9000808717ea9c0e3216e703e0ba1564b42f55e959843c60a49ae0e4eb9a8e7"
Mar 13 10:50:03.078895 master-0 kubenswrapper[7271]: I0313 10:50:03.078817 7271 generic.go:334] "Generic (PLEG): container finished" podID="866cf034-8fd8-4f16-8e9b-68627228aa8d" containerID="838b4cfccf523638ccd0bf31bf9b16492b12c33b0f070423ea23f66b9d72c78e" exitCode=0
Mar 13 10:50:03.079006 master-0 kubenswrapper[7271]: I0313 10:50:03.078887 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" event={"ID":"866cf034-8fd8-4f16-8e9b-68627228aa8d","Type":"ContainerDied","Data":"838b4cfccf523638ccd0bf31bf9b16492b12c33b0f070423ea23f66b9d72c78e"}
Mar 13 10:50:03.081350 master-0 kubenswrapper[7271]: I0313 10:50:03.081305 7271 generic.go:334] "Generic (PLEG): container finished" podID="8cf9326b-bc23-45c2-82c4-9c08c739ac5a" containerID="43230423fe1ad4b520548b08f0898f9f7d5cb849ac1cf6fadabab03cda0d4f3c" exitCode=0
Mar 13 10:50:03.081449 master-0 kubenswrapper[7271]: I0313 10:50:03.081395 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" event={"ID":"8cf9326b-bc23-45c2-82c4-9c08c739ac5a","Type":"ContainerDied","Data":"43230423fe1ad4b520548b08f0898f9f7d5cb849ac1cf6fadabab03cda0d4f3c"}
Mar 13 10:50:03.082148 master-0 kubenswrapper[7271]: I0313 10:50:03.081989 7271 scope.go:117] "RemoveContainer" containerID="838b4cfccf523638ccd0bf31bf9b16492b12c33b0f070423ea23f66b9d72c78e"
Mar 13 10:50:03.082148 master-0 kubenswrapper[7271]: I0313 10:50:03.082125 7271 scope.go:117] "RemoveContainer" containerID="43230423fe1ad4b520548b08f0898f9f7d5cb849ac1cf6fadabab03cda0d4f3c"
Mar 13 10:50:03.101820 master-0 kubenswrapper[7271]: I0313 10:50:03.093812 7271 generic.go:334] "Generic (PLEG): container finished" podID="b8d40b37-0f3d-4531-9fa8-eda965d2337d" containerID="928f705a6df1a237b298e2f772354a8814379ea930e2d466bbe222c0fc185584" exitCode=0
Mar 13 10:50:03.101820 master-0 kubenswrapper[7271]: I0313 10:50:03.093860 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" event={"ID":"b8d40b37-0f3d-4531-9fa8-eda965d2337d","Type":"ContainerDied","Data":"928f705a6df1a237b298e2f772354a8814379ea930e2d466bbe222c0fc185584"}
Mar 13 10:50:03.101820 master-0 kubenswrapper[7271]: I0313 10:50:03.095432 7271 scope.go:117] "RemoveContainer" containerID="928f705a6df1a237b298e2f772354a8814379ea930e2d466bbe222c0fc185584"
Mar 13 10:50:03.147078 master-0 kubenswrapper[7271]: I0313 10:50:03.146959 7271 scope.go:117] "RemoveContainer" containerID="53a8fd339624b3a824ba77b1d93455581709099722d103fd93b0ffb255eebf03"
Mar 13 10:50:03.303602 master-0 kubenswrapper[7271]: I0313 10:50:03.303537 7271 scope.go:117] "RemoveContainer" containerID="1920e0c05ffebe7a0fab80b000aebd0c99a9626ca78c9c2b099c218c0c998378"
Mar 13 10:50:03.433459 master-0 kubenswrapper[7271]: I0313 10:50:03.433424 7271 scope.go:117] "RemoveContainer" containerID="f5cc508c8bba11aea5ee45f0185ba6b283bf13e245305fcd3727611ac4aa5998"
Mar 13 10:50:03.479261 master-0 kubenswrapper[7271]: I0313 10:50:03.479224 7271 scope.go:117] "RemoveContainer" containerID="a242486632cda89db044ed9feff7bb156e404c15924daa0514297e6cfa246629"
Mar 13 10:50:03.673551 master-0 kubenswrapper[7271]: I0313 10:50:03.673504 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"
Mar 13 10:50:03.673874 master-0 kubenswrapper[7271]: I0313 10:50:03.673600 7271 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"
Mar 13 10:50:03.868517 master-0 kubenswrapper[7271]: E0313 10:50:03.868467 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:50:03.886802 master-0 kubenswrapper[7271]: I0313 10:50:03.886756 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:50:03.886802 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:50:03.886802 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:50:03.886802 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:50:03.887230 master-0 kubenswrapper[7271]: I0313 10:50:03.886819 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:50:04.102509 master-0 kubenswrapper[7271]: I0313 10:50:04.102462 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" event={"ID":"866cf034-8fd8-4f16-8e9b-68627228aa8d","Type":"ContainerStarted","Data":"08e72df7bb1bd924beb52a09704c0f7626129e436b213b9bba5223f5b92497d4"}
Mar 13 10:50:04.104660 master-0 kubenswrapper[7271]: I0313 10:50:04.104623 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" event={"ID":"d0f42a72-24c7-49e6-8edb-97b2b0d6183a","Type":"ContainerStarted","Data":"199ef1725fc0b34fafcaa9a2e7ff80ecc21be24d314bc723ad46376b3948c3ed"}
Mar 13 10:50:04.106229 master-0 kubenswrapper[7271]: I0313 10:50:04.106205 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" event={"ID":"a1a998af-4fc0-4078-a6a0-93dde6c00508","Type":"ContainerStarted","Data":"6f0b5735c47f582085790eff33ad478ee7bc4f88084dce6f411e8a6099690944"}
Mar 13 10:50:04.108174 master-0 kubenswrapper[7271]: I0313 10:50:04.108141 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" event={"ID":"8f9db15a-8854-485b-9863-9cbe5dddd977","Type":"ContainerStarted","Data":"e45e06b6e23f597c7d9ebc6e83c40cdff94961274af64b810698c2479a86960e"}
Mar 13 10:50:04.110219 master-0 kubenswrapper[7271]: I0313 10:50:04.110149 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" event={"ID":"ec3168fc-6c8f-4603-94e0-17b1ae22a802","Type":"ContainerStarted","Data":"20f69befa7987e3711c7987fc07373d868a17be3a8563747321e56031d3eae68"}
Mar 13 10:50:04.116363 master-0 kubenswrapper[7271]: I0313 10:50:04.116323 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" event={"ID":"549bd192-0235-4994-b485-f1b10d16f6b5","Type":"ContainerStarted","Data":"4d601df5d9a62c0b261f9843505b851d1294a641475a66cde6319397af1bdfc1"}
Mar 13 10:50:04.125481 master-0 kubenswrapper[7271]: I0313 10:50:04.125437 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-d5b45_8a305f45-8689-45a8-8c8b-5954f2c863df/package-server-manager/0.log"
Mar 13 10:50:04.126301 master-0 kubenswrapper[7271]: I0313 10:50:04.126273 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" event={"ID":"8a305f45-8689-45a8-8c8b-5954f2c863df","Type":"ContainerStarted","Data":"49ce98305d477a81aa0f86296e7880b6b9c7c848ff7fbc9d17667f3f368708d4"}
Mar 13 10:50:04.126991 master-0 kubenswrapper[7271]: I0313 10:50:04.126968 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:50:04.132709 master-0 kubenswrapper[7271]: I0313 10:50:04.132681 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-pzjxd_d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33/cluster-autoscaler-operator/0.log"
Mar 13 10:50:04.136890 master-0 kubenswrapper[7271]: I0313 10:50:04.136851 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" event={"ID":"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33","Type":"ContainerStarted","Data":"9fa4e6fb1d30b7b5a30ea9e9b692f33b122a6b2c26a2402cae114a62e10fce8b"}
Mar 13 10:50:04.151196 master-0 kubenswrapper[7271]: I0313 10:50:04.151165 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-7h8nz_48f99840-4d9e-49c5-819e-0bb15493feb5/machine-api-operator/0.log"
Mar 13 10:50:04.152255 master-0 kubenswrapper[7271]: I0313 10:50:04.152203 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" event={"ID":"48f99840-4d9e-49c5-819e-0bb15493feb5","Type":"ContainerStarted","Data":"e259cc8c4ca38a03e508e257fc482331c9b3adc55176b541dbebfb8b59682c93"}
Mar 13 10:50:04.159198 master-0 kubenswrapper[7271]: I0313 10:50:04.159159 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"d50222c619a1beb462f2ff2c50918ed3814098cfb9ee8c852270a8c209a51384"}
Mar 13 10:50:04.159767 master-0 kubenswrapper[7271]: I0313 10:50:04.159719 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"
Mar 13 10:50:04.159950 master-0 kubenswrapper[7271]: E0313 10:50:04.159927 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:50:04.165708 master-0 kubenswrapper[7271]: I0313 10:50:04.165648 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" event={"ID":"c09f42db-e6d7-469d-9761-88a879f6aa6b","Type":"ContainerStarted","Data":"dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8"}
Mar 
13 10:50:04.166720 master-0 kubenswrapper[7271]: I0313 10:50:04.166677 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:50:04.173058 master-0 kubenswrapper[7271]: I0313 10:50:04.173015 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" event={"ID":"6ed47c57-533f-43e4-88eb-07da29b4878f","Type":"ContainerStarted","Data":"d47ce6040575b83aadd7a6c04cfe142e80c559de58f1e1161c270bccc8087405"} Mar 13 10:50:04.173714 master-0 kubenswrapper[7271]: I0313 10:50:04.173691 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:50:04.178180 master-0 kubenswrapper[7271]: I0313 10:50:04.178129 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" event={"ID":"8cf9326b-bc23-45c2-82c4-9c08c739ac5a","Type":"ContainerStarted","Data":"ed4df84c1fa5d5a3d1191e7946311f08fab0ad80f58eb63c4458b9c27edef01d"} Mar 13 10:50:04.192868 master-0 kubenswrapper[7271]: I0313 10:50:04.192827 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" event={"ID":"b8d40b37-0f3d-4531-9fa8-eda965d2337d","Type":"ContainerStarted","Data":"70a82868e3697e28af22322fcb7b113d13acd78acc7b866ea12dd432ee9d635b"} Mar 13 10:50:04.196171 master-0 kubenswrapper[7271]: I0313 10:50:04.196106 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-9fptc_42b4d53c-af72-44c8-9605-271445f95f87/cluster-node-tuning-operator/0.log" Mar 13 10:50:04.196255 master-0 kubenswrapper[7271]: I0313 10:50:04.196204 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" event={"ID":"42b4d53c-af72-44c8-9605-271445f95f87","Type":"ContainerStarted","Data":"02e68f2f5f014a8890b45a4fea2b9e7367665a1a630c6caeb0162a202cb213ba"} Mar 13 10:50:04.210889 master-0 kubenswrapper[7271]: I0313 10:50:04.210825 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" event={"ID":"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303","Type":"ContainerStarted","Data":"7a9e675b65767654e06fb16237d85ae94dea762e6a46ff359364aec0091bae7d"} Mar 13 10:50:04.229351 master-0 kubenswrapper[7271]: I0313 10:50:04.229305 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:50:04.882677 master-0 kubenswrapper[7271]: I0313 10:50:04.882631 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:04.882677 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:04.882677 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:04.882677 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:04.882981 master-0 kubenswrapper[7271]: I0313 10:50:04.882689 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:05.882500 master-0 kubenswrapper[7271]: I0313 10:50:05.882412 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:05.882500 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:05.882500 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:05.882500 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:05.882500 master-0 kubenswrapper[7271]: I0313 10:50:05.882467 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:06.278560 master-0 kubenswrapper[7271]: I0313 10:50:06.278463 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:50:06.279519 master-0 kubenswrapper[7271]: I0313 10:50:06.279479 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" Mar 13 10:50:06.279882 master-0 kubenswrapper[7271]: E0313 10:50:06.279839 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:50:06.882826 master-0 kubenswrapper[7271]: I0313 10:50:06.882743 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:06.882826 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:06.882826 master-0 kubenswrapper[7271]: 
[+]process-running ok Mar 13 10:50:06.882826 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:06.882826 master-0 kubenswrapper[7271]: I0313 10:50:06.882820 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:07.446044 master-0 kubenswrapper[7271]: I0313 10:50:07.445978 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:50:07.883919 master-0 kubenswrapper[7271]: I0313 10:50:07.883834 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:07.883919 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:07.883919 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:07.883919 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:07.883919 master-0 kubenswrapper[7271]: I0313 10:50:07.883916 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:08.883979 master-0 kubenswrapper[7271]: I0313 10:50:08.883921 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:08.883979 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:08.883979 master-0 
kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:08.883979 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:08.883979 master-0 kubenswrapper[7271]: I0313 10:50:08.883981 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:09.143076 master-0 kubenswrapper[7271]: I0313 10:50:09.142881 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:50:09.144024 master-0 kubenswrapper[7271]: I0313 10:50:09.143961 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" Mar 13 10:50:09.144502 master-0 kubenswrapper[7271]: E0313 10:50:09.144438 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:50:09.147345 master-0 kubenswrapper[7271]: I0313 10:50:09.147276 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:50:09.250408 master-0 kubenswrapper[7271]: I0313 10:50:09.250336 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" Mar 13 10:50:09.250684 master-0 kubenswrapper[7271]: E0313 10:50:09.250550 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:50:09.882786 master-0 kubenswrapper[7271]: I0313 10:50:09.882719 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:09.882786 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:09.882786 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:09.882786 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:09.882786 master-0 kubenswrapper[7271]: I0313 10:50:09.882778 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:10.883430 master-0 kubenswrapper[7271]: I0313 10:50:10.883337 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:10.883430 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:10.883430 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:10.883430 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:10.884501 master-0 kubenswrapper[7271]: I0313 10:50:10.883458 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 13 10:50:11.884251 master-0 kubenswrapper[7271]: I0313 10:50:11.884068 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:11.884251 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:11.884251 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:11.884251 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:11.884251 master-0 kubenswrapper[7271]: I0313 10:50:11.884170 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:12.645761 master-0 kubenswrapper[7271]: I0313 10:50:12.645661 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:50:12.645761 master-0 kubenswrapper[7271]: I0313 10:50:12.645746 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:50:12.663551 master-0 kubenswrapper[7271]: I0313 10:50:12.663490 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 10:50:12.664852 master-0 kubenswrapper[7271]: I0313 10:50:12.664821 7271 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 13 10:50:12.671469 master-0 kubenswrapper[7271]: I0313 10:50:12.671417 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 10:50:12.685090 master-0 kubenswrapper[7271]: I0313 10:50:12.685012 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 
10:50:12.883796 master-0 kubenswrapper[7271]: I0313 10:50:12.883684 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:12.883796 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:12.883796 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:12.883796 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:12.884299 master-0 kubenswrapper[7271]: I0313 10:50:12.883800 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:13.280273 master-0 kubenswrapper[7271]: I0313 10:50:13.280182 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:50:13.280273 master-0 kubenswrapper[7271]: I0313 10:50:13.280225 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4b2bcf13-1356-4770-8166-5fe5047a25c9" Mar 13 10:50:13.883003 master-0 kubenswrapper[7271]: I0313 10:50:13.882907 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:13.883003 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:13.883003 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:13.883003 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:13.883316 master-0 kubenswrapper[7271]: I0313 10:50:13.883010 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:14.883703 master-0 kubenswrapper[7271]: I0313 10:50:14.883573 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:14.883703 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:14.883703 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:14.883703 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:14.884423 master-0 kubenswrapper[7271]: I0313 10:50:14.883715 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:15.676764 master-0 kubenswrapper[7271]: I0313 10:50:15.676684 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=3.676666019 podStartE2EDuration="3.676666019s" podCreationTimestamp="2026-03-13 10:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:50:15.675380614 +0000 UTC m=+870.202203004" watchObservedRunningTime="2026-03-13 10:50:15.676666019 +0000 UTC m=+870.203488409" Mar 13 10:50:15.883102 master-0 kubenswrapper[7271]: I0313 10:50:15.883033 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
10:50:15.883102 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:15.883102 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:15.883102 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:15.883102 master-0 kubenswrapper[7271]: I0313 10:50:15.883101 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:16.282687 master-0 kubenswrapper[7271]: I0313 10:50:16.282628 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 10:50:16.283352 master-0 kubenswrapper[7271]: I0313 10:50:16.283326 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" Mar 13 10:50:16.283557 master-0 kubenswrapper[7271]: E0313 10:50:16.283532 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 10:50:16.883311 master-0 kubenswrapper[7271]: I0313 10:50:16.883226 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:16.883311 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:16.883311 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:16.883311 master-0 kubenswrapper[7271]: healthz 
check failed Mar 13 10:50:16.883702 master-0 kubenswrapper[7271]: I0313 10:50:16.883332 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:17.883136 master-0 kubenswrapper[7271]: I0313 10:50:17.883018 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:17.883136 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:17.883136 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:17.883136 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:17.883136 master-0 kubenswrapper[7271]: I0313 10:50:17.883089 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:18.883256 master-0 kubenswrapper[7271]: I0313 10:50:18.883165 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:18.883256 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:18.883256 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:18.883256 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:18.883256 master-0 kubenswrapper[7271]: I0313 10:50:18.883244 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:19.883267 master-0 kubenswrapper[7271]: I0313 10:50:19.883128 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:19.883267 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:19.883267 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:19.883267 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:19.883267 master-0 kubenswrapper[7271]: I0313 10:50:19.883264 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:20.883057 master-0 kubenswrapper[7271]: I0313 10:50:20.882956 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:20.883057 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:20.883057 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:20.883057 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:20.885255 master-0 kubenswrapper[7271]: I0313 10:50:20.884727 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:21.883184 master-0 kubenswrapper[7271]: I0313 10:50:21.883103 7271 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:21.883184 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:21.883184 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:21.883184 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:21.883184 master-0 kubenswrapper[7271]: I0313 10:50:21.883178 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:22.882301 master-0 kubenswrapper[7271]: I0313 10:50:22.882235 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:50:22.882301 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:50:22.882301 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:50:22.882301 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:50:22.882301 master-0 kubenswrapper[7271]: I0313 10:50:22.882294 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:50:23.017903 master-0 kubenswrapper[7271]: I0313 10:50:23.017826 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: E0313 10:50:23.018461 7271 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerName="multus-admission-controller" Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018497 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerName="multus-admission-controller" Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: E0313 10:50:23.018513 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1769d48d-7ef0-48ee-9b7d-b46151ae5df6" containerName="installer" Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018522 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="1769d48d-7ef0-48ee-9b7d-b46151ae5df6" containerName="installer" Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: E0313 10:50:23.018546 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerName="kube-rbac-proxy" Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018554 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerName="kube-rbac-proxy" Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: E0313 10:50:23.018572 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3bcb671-5236-49fb-8540-131f18b91fc3" containerName="installer" Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018580 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3bcb671-5236-49fb-8540-131f18b91fc3" containerName="installer" Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: E0313 10:50:23.018609 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55a2a95-178c-4fcd-9866-3a149948d1d3" containerName="installer" Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018618 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55a2a95-178c-4fcd-9866-3a149948d1d3" containerName="installer" Mar 13 
10:50:23.018979 master-0 kubenswrapper[7271]: E0313 10:50:23.018628 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" containerName="installer"
Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018636 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" containerName="installer"
Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018879 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerName="kube-rbac-proxy"
Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018944 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" containerName="installer"
Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018968 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55a2a95-178c-4fcd-9866-3a149948d1d3" containerName="installer"
Mar 13 10:50:23.018979 master-0 kubenswrapper[7271]: I0313 10:50:23.018985 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3bcb671-5236-49fb-8540-131f18b91fc3" containerName="installer"
Mar 13 10:50:23.020125 master-0 kubenswrapper[7271]: I0313 10:50:23.019027 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="1769d48d-7ef0-48ee-9b7d-b46151ae5df6" containerName="installer"
Mar 13 10:50:23.020125 master-0 kubenswrapper[7271]: I0313 10:50:23.019059 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="95339220-324d-45e7-bdc2-e4f42fbd1d32" containerName="multus-admission-controller"
Mar 13 10:50:23.020125 master-0 kubenswrapper[7271]: I0313 10:50:23.019648 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.021924 master-0 kubenswrapper[7271]: I0313 10:50:23.021870 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-928wn"
Mar 13 10:50:23.022043 master-0 kubenswrapper[7271]: I0313 10:50:23.021947 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 13 10:50:23.067916 master-0 kubenswrapper[7271]: I0313 10:50:23.030640 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"]
Mar 13 10:50:23.087365 master-0 kubenswrapper[7271]: I0313 10:50:23.087310 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.087365 master-0 kubenswrapper[7271]: I0313 10:50:23.087373 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.087866 master-0 kubenswrapper[7271]: I0313 10:50:23.087399 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.188336 master-0 kubenswrapper[7271]: I0313 10:50:23.188164 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.188336 master-0 kubenswrapper[7271]: I0313 10:50:23.188221 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.188336 master-0 kubenswrapper[7271]: I0313 10:50:23.188307 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.188729 master-0 kubenswrapper[7271]: I0313 10:50:23.188364 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.188729 master-0 kubenswrapper[7271]: I0313 10:50:23.188369 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.205508 master-0 kubenswrapper[7271]: I0313 10:50:23.205316 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.390442 master-0 kubenswrapper[7271]: I0313 10:50:23.390327 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:23.827826 master-0 kubenswrapper[7271]: I0313 10:50:23.827727 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"]
Mar 13 10:50:23.883957 master-0 kubenswrapper[7271]: I0313 10:50:23.883820 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:50:23.883957 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:50:23.883957 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:50:23.883957 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:50:23.883957 master-0 kubenswrapper[7271]: I0313 10:50:23.883894 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:50:23.883957 master-0 kubenswrapper[7271]: I0313 10:50:23.883964 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:50:23.884735 master-0 kubenswrapper[7271]: I0313 10:50:23.884668 7271 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"b7092e3092801ee7cac052ee6ef29cb5b3962e6dfa9253411baa18d8c09d2942"} pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" containerMessage="Container router failed startup probe, will be restarted"
Mar 13 10:50:23.884735 master-0 kubenswrapper[7271]: I0313 10:50:23.884706 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" containerID="cri-o://b7092e3092801ee7cac052ee6ef29cb5b3962e6dfa9253411baa18d8c09d2942" gracePeriod=3600
Mar 13 10:50:24.369271 master-0 kubenswrapper[7271]: I0313 10:50:24.369185 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c","Type":"ContainerStarted","Data":"5ee286f0b3cdb47865421f7ee4618ced9d85dbc545353442dc4336443d56416e"}
Mar 13 10:50:24.369271 master-0 kubenswrapper[7271]: I0313 10:50:24.369239 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c","Type":"ContainerStarted","Data":"e94e101afe6d310b4795ed9ac97800bdab3626ccbb55e076af5e0699e89feaba"}
Mar 13 10:50:24.387514 master-0 kubenswrapper[7271]: I0313 10:50:24.387424 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" podStartSLOduration=1.387398826 podStartE2EDuration="1.387398826s" podCreationTimestamp="2026-03-13 10:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:50:24.385372271 +0000 UTC m=+878.912194661" watchObservedRunningTime="2026-03-13 10:50:24.387398826 +0000 UTC m=+878.914221226"
Mar 13 10:50:27.646010 master-0 kubenswrapper[7271]: I0313 10:50:27.645951 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"
Mar 13 10:50:27.646622 master-0 kubenswrapper[7271]: E0313 10:50:27.646250 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:50:40.645089 master-0 kubenswrapper[7271]: I0313 10:50:40.645045 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"
Mar 13 10:50:40.645830 master-0 kubenswrapper[7271]: E0313 10:50:40.645356 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:50:40.814483 master-0 kubenswrapper[7271]: I0313 10:50:40.814429 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:50:53.646128 master-0 kubenswrapper[7271]: I0313 10:50:53.646065 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8"
Mar 13 10:50:53.646765 master-0 kubenswrapper[7271]: E0313 10:50:53.646423 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 10:50:56.565473 master-0 kubenswrapper[7271]: I0313 10:50:56.565394 7271 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 13 10:50:56.566101 master-0 kubenswrapper[7271]: I0313 10:50:56.565659 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://d50222c619a1beb462f2ff2c50918ed3814098cfb9ee8c852270a8c209a51384" gracePeriod=30
Mar 13 10:50:56.566474 master-0 kubenswrapper[7271]: I0313 10:50:56.566427 7271 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 10:50:56.566730 master-0 kubenswrapper[7271]: E0313 10:50:56.566706 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.566730 master-0 kubenswrapper[7271]: I0313 10:50:56.566727 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: E0313 10:50:56.566736 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: I0313 10:50:56.566742 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: E0313 10:50:56.566755 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: I0313 10:50:56.566761 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: E0313 10:50:56.566770 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: I0313 10:50:56.566776 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: E0313 10:50:56.566794 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: I0313 10:50:56.566800 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: E0313 10:50:56.566814 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.566813 master-0 kubenswrapper[7271]: I0313 10:50:56.566820 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: I0313 10:50:56.566947 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: I0313 10:50:56.566962 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: I0313 10:50:56.566972 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: I0313 10:50:56.566979 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: I0313 10:50:56.566994 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: I0313 10:50:56.567011 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: I0313 10:50:56.567018 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: E0313 10:50:56.567161 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: I0313 10:50:56.567173 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: E0313 10:50:56.567182 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567205 master-0 kubenswrapper[7271]: I0313 10:50:56.567189 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567646 master-0 kubenswrapper[7271]: I0313 10:50:56.567315 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567646 master-0 kubenswrapper[7271]: I0313 10:50:56.567329 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567646 master-0 kubenswrapper[7271]: E0313 10:50:56.567466 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.567646 master-0 kubenswrapper[7271]: I0313 10:50:56.567480 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 13 10:50:56.568348 master-0 kubenswrapper[7271]: I0313 10:50:56.568315 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:50:56.672652 master-0 kubenswrapper[7271]: I0313 10:50:56.672542 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 10:50:56.688505 master-0 kubenswrapper[7271]: I0313 10:50:56.688419 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e17958e403057a147cdad09d1abe4cda\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:50:56.688623 master-0 kubenswrapper[7271]: I0313 10:50:56.688581 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e17958e403057a147cdad09d1abe4cda\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:50:56.731675 master-0 kubenswrapper[7271]: I0313 10:50:56.731633 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:50:56.752804 master-0 kubenswrapper[7271]: I0313 10:50:56.752738 7271 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="3fb81ab2-f471-4c79-9f86-04df865ef3ef"
Mar 13 10:50:56.789752 master-0 kubenswrapper[7271]: I0313 10:50:56.789697 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e17958e403057a147cdad09d1abe4cda\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:50:56.789853 master-0 kubenswrapper[7271]: I0313 10:50:56.789806 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e17958e403057a147cdad09d1abe4cda\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:50:56.789958 master-0 kubenswrapper[7271]: I0313 10:50:56.789929 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e17958e403057a147cdad09d1abe4cda\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:50:56.790018 master-0 kubenswrapper[7271]: I0313 10:50:56.789998 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e17958e403057a147cdad09d1abe4cda\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:50:56.890959 master-0 kubenswrapper[7271]: I0313 10:50:56.890828 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 13 10:50:56.890959 master-0 kubenswrapper[7271]: I0313 10:50:56.890880 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 13 10:50:56.890959 master-0 kubenswrapper[7271]: I0313 10:50:56.890918 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 13 10:50:56.890959 master-0 kubenswrapper[7271]: I0313 10:50:56.890920 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:50:56.890959 master-0 kubenswrapper[7271]: I0313 10:50:56.890957 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:50:56.891337 master-0 kubenswrapper[7271]: I0313 10:50:56.890969 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 13 10:50:56.891337 master-0 kubenswrapper[7271]: I0313 10:50:56.890972 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:50:56.891337 master-0 kubenswrapper[7271]: I0313 10:50:56.890986 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:50:56.891337 master-0 kubenswrapper[7271]: I0313 10:50:56.891076 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 13 10:50:56.891337 master-0 kubenswrapper[7271]: I0313 10:50:56.891091 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:50:56.891545 master-0 kubenswrapper[7271]: I0313 10:50:56.891438 7271 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 13 10:50:56.891545 master-0 kubenswrapper[7271]: I0313 10:50:56.891456 7271 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\""
Mar 13 10:50:56.891545 master-0 kubenswrapper[7271]: I0313 10:50:56.891467 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\""
Mar 13 10:50:56.891545 master-0 kubenswrapper[7271]: I0313 10:50:56.891479 7271 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 10:50:56.891545 master-0 kubenswrapper[7271]: I0313 10:50:56.891490 7271 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 13 10:50:56.968523 master-0 kubenswrapper[7271]: I0313 10:50:56.968446 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:50:56.997441 master-0 kubenswrapper[7271]: W0313 10:50:56.997368 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode17958e403057a147cdad09d1abe4cda.slice/crio-5da4052af9bea3df523fc77d05acaad5b723b4664b4df51ba6ccb980eb0d194a WatchSource:0}: Error finding container 5da4052af9bea3df523fc77d05acaad5b723b4664b4df51ba6ccb980eb0d194a: Status 404 returned error can't find the container with id 5da4052af9bea3df523fc77d05acaad5b723b4664b4df51ba6ccb980eb0d194a
Mar 13 10:50:57.618392 master-0 kubenswrapper[7271]: I0313 10:50:57.618310 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e17958e403057a147cdad09d1abe4cda","Type":"ContainerStarted","Data":"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a"}
Mar 13 10:50:57.618392 master-0 kubenswrapper[7271]: I0313 10:50:57.618362 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e17958e403057a147cdad09d1abe4cda","Type":"ContainerStarted","Data":"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c"}
Mar 13 10:50:57.618392 master-0 kubenswrapper[7271]: I0313 10:50:57.618372 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e17958e403057a147cdad09d1abe4cda","Type":"ContainerStarted","Data":"5da4052af9bea3df523fc77d05acaad5b723b4664b4df51ba6ccb980eb0d194a"}
Mar 13 10:50:57.620337 master-0 kubenswrapper[7271]: I0313 10:50:57.620286 7271 generic.go:334] "Generic (PLEG): container finished" podID="5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" containerID="5ee286f0b3cdb47865421f7ee4618ced9d85dbc545353442dc4336443d56416e" exitCode=0
Mar 13 10:50:57.620415 master-0 kubenswrapper[7271]: I0313 10:50:57.620360 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c","Type":"ContainerDied","Data":"5ee286f0b3cdb47865421f7ee4618ced9d85dbc545353442dc4336443d56416e"}
Mar 13 10:50:57.624253 master-0 kubenswrapper[7271]: I0313 10:50:57.624212 7271 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="d50222c619a1beb462f2ff2c50918ed3814098cfb9ee8c852270a8c209a51384" exitCode=0
Mar 13 10:50:57.624344 master-0 kubenswrapper[7271]: I0313 10:50:57.624277 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecef9696f6ed61b901e54b92a5f3382e4d7c9cf19d60275d449ceb9924469019"
Mar 13 10:50:57.624344 master-0 kubenswrapper[7271]: I0313 10:50:57.624293 7271 scope.go:117] "RemoveContainer" containerID="e9000808717ea9c0e3216e703e0ba1564b42f55e959843c60a49ae0e4eb9a8e7"
Mar 13 10:50:57.624516 master-0 kubenswrapper[7271]: I0313 10:50:57.624468 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 10:50:57.668762 master-0 kubenswrapper[7271]: I0313 10:50:57.668715 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes"
Mar 13 10:50:57.669197 master-0 kubenswrapper[7271]: I0313 10:50:57.669171 7271 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 13 10:50:57.696098 master-0 kubenswrapper[7271]: I0313 10:50:57.694310 7271 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="3fb81ab2-f471-4c79-9f86-04df865ef3ef"
Mar 13 10:50:57.698045 master-0 kubenswrapper[7271]: I0313 10:50:57.698003 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 13 10:50:57.698045 master-0 kubenswrapper[7271]: I0313 10:50:57.698040 7271 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="3fb81ab2-f471-4c79-9f86-04df865ef3ef"
Mar 13 10:50:57.707118 master-0 kubenswrapper[7271]: I0313 10:50:57.707081 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 13 10:50:57.707118 master-0 kubenswrapper[7271]: I0313 10:50:57.707117 7271 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="3fb81ab2-f471-4c79-9f86-04df865ef3ef"
Mar 13 10:50:58.636684 master-0 kubenswrapper[7271]: I0313 10:50:58.636623 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e17958e403057a147cdad09d1abe4cda","Type":"ContainerStarted","Data":"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef"}
Mar 13 10:50:58.636684 master-0 kubenswrapper[7271]: I0313 10:50:58.636672 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e17958e403057a147cdad09d1abe4cda","Type":"ContainerStarted","Data":"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d"}
Mar 13 10:50:58.660665 master-0 kubenswrapper[7271]: I0313 10:50:58.660602 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.660569996 podStartE2EDuration="2.660569996s" podCreationTimestamp="2026-03-13 10:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:50:58.658195181 +0000 UTC m=+913.185017581" watchObservedRunningTime="2026-03-13 10:50:58.660569996 +0000 UTC m=+913.187392386"
Mar 13 10:50:58.919956 master-0 kubenswrapper[7271]: I0313 10:50:58.919804 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:59.020142 master-0 kubenswrapper[7271]: I0313 10:50:59.020051 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-var-lock\") pod \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") "
Mar 13 10:50:59.020142 master-0 kubenswrapper[7271]: I0313 10:50:59.020149 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" (UID: "5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:50:59.020478 master-0 kubenswrapper[7271]: I0313 10:50:59.020170 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kubelet-dir\") pod \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") "
Mar 13 10:50:59.020478 master-0 kubenswrapper[7271]: I0313 10:50:59.020200 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" (UID: "5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:50:59.020478 master-0 kubenswrapper[7271]: I0313 10:50:59.020296 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kube-api-access\") pod \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\" (UID: \"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c\") "
Mar 13 10:50:59.020785 master-0 kubenswrapper[7271]: I0313 10:50:59.020724 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 10:50:59.020785 master-0 kubenswrapper[7271]: I0313 10:50:59.020750 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 10:50:59.022929 master-0 kubenswrapper[7271]: I0313 10:50:59.022875 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" (UID: "5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:50:59.122178 master-0 kubenswrapper[7271]: I0313 10:50:59.122121 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 10:50:59.646968 master-0 kubenswrapper[7271]: I0313 10:50:59.646930 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:50:59.653104 master-0 kubenswrapper[7271]: I0313 10:50:59.653058 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c","Type":"ContainerDied","Data":"e94e101afe6d310b4795ed9ac97800bdab3626ccbb55e076af5e0699e89feaba"}
Mar 13 10:50:59.653104 master-0 kubenswrapper[7271]: I0313 10:50:59.653102 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e94e101afe6d310b4795ed9ac97800bdab3626ccbb55e076af5e0699e89feaba"
Mar 13 10:51:06.969080 master-0 kubenswrapper[7271]: I0313 10:51:06.968965 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:51:06.969938 master-0 kubenswrapper[7271]: I0313 10:51:06.969106 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:51:06.969938 master-0 kubenswrapper[7271]: I0313 10:51:06.969122 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:51:06.969938 master-0 kubenswrapper[7271]: I0313 10:51:06.969134 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:51:06.974731 master-0 kubenswrapper[7271]: I0313 10:51:06.974670 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:51:06.975494 master-0 kubenswrapper[7271]: I0313 10:51:06.975452 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:51:07.719720 master-0 kubenswrapper[7271]: I0313 10:51:07.719670 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:51:07.721304 master-0 kubenswrapper[7271]: I0313 10:51:07.721278 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:51:10.738855 master-0 kubenswrapper[7271]: I0313 10:51:10.738791 7271 generic.go:334] "Generic (PLEG): container finished" podID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerID="b7092e3092801ee7cac052ee6ef29cb5b3962e6dfa9253411baa18d8c09d2942" exitCode=0
Mar 13 10:51:10.739445 master-0 kubenswrapper[7271]: I0313 10:51:10.738867 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerDied","Data":"b7092e3092801ee7cac052ee6ef29cb5b3962e6dfa9253411baa18d8c09d2942"}
Mar 13 10:51:10.739445 master-0 kubenswrapper[7271]: I0313 10:51:10.738980 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerStarted","Data":"0b24465f9fb9d577096620c9e6db8d758ee4126ca9a38d85dbec64c888ecae41"}
Mar 13 10:51:10.739445 master-0 kubenswrapper[7271]: I0313 10:51:10.739020 7271 scope.go:117] "RemoveContainer" containerID="355bba8a4cefe5a34bf9903f07fd7230c56e2657d48a952a7979a55c45edb0b5"
Mar 13 10:51:10.881259 master-0 kubenswrapper[7271]: I0313 10:51:10.881128 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:51:10.885501 master-0 kubenswrapper[7271]: I0313 10:51:10.885429 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:10.885501 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:10.885501 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:10.885501 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:10.886006 master-0 kubenswrapper[7271]: I0313 10:51:10.885519 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:11.883208 master-0 kubenswrapper[7271]: I0313 10:51:11.883109 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:11.883208 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:11.883208 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:11.883208 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:11.883792 master-0 kubenswrapper[7271]: I0313 10:51:11.883199 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:12.880446 master-0 kubenswrapper[7271]: I0313 10:51:12.880383 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:51:12.882789 master-0 kubenswrapper[7271]: I0313 10:51:12.882712 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:12.882789 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:12.882789 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:12.882789 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:12.883098 master-0 kubenswrapper[7271]: I0313 10:51:12.882794 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:13.882786 master-0 kubenswrapper[7271]: I0313 10:51:13.882674 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:13.882786 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:13.882786 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:13.882786 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:13.882786 master-0 kubenswrapper[7271]: I0313 10:51:13.882764 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:14.882391 master-0 kubenswrapper[7271]: I0313 10:51:14.882325 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:14.882391 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:14.882391 
master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:14.882391 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:14.882391 master-0 kubenswrapper[7271]: I0313 10:51:14.882388 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:15.883203 master-0 kubenswrapper[7271]: I0313 10:51:15.883128 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:15.883203 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:15.883203 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:15.883203 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:15.884190 master-0 kubenswrapper[7271]: I0313 10:51:15.883223 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:16.883412 master-0 kubenswrapper[7271]: I0313 10:51:16.883351 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:16.883412 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:16.883412 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:16.883412 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:16.884096 master-0 kubenswrapper[7271]: I0313 10:51:16.883426 7271 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:17.882335 master-0 kubenswrapper[7271]: I0313 10:51:17.882277 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:17.882335 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:17.882335 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:17.882335 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:17.882646 master-0 kubenswrapper[7271]: I0313 10:51:17.882352 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:18.320895 master-0 kubenswrapper[7271]: I0313 10:51:18.320835 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/telemeter-client-6745c97c48-vsk4v"] Mar 13 10:51:18.321456 master-0 kubenswrapper[7271]: I0313 10:51:18.321158 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="telemeter-client" containerID="cri-o://83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f" gracePeriod=30 Mar 13 10:51:18.321456 master-0 kubenswrapper[7271]: I0313 10:51:18.321249 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" 
containerName="kube-rbac-proxy" containerID="cri-o://0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4" gracePeriod=30 Mar 13 10:51:18.321456 master-0 kubenswrapper[7271]: I0313 10:51:18.321275 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="reload" containerID="cri-o://3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769" gracePeriod=30 Mar 13 10:51:18.762425 master-0 kubenswrapper[7271]: I0313 10:51:18.762349 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6745c97c48-vsk4v_bc4f01ba-a729-4cc8-a2d6-b4efe197efe3/telemeter-client/0.log" Mar 13 10:51:18.762701 master-0 kubenswrapper[7271]: I0313 10:51:18.762514 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:51:18.800104 master-0 kubenswrapper[7271]: I0313 10:51:18.800040 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6745c97c48-vsk4v_bc4f01ba-a729-4cc8-a2d6-b4efe197efe3/telemeter-client/0.log" Mar 13 10:51:18.800104 master-0 kubenswrapper[7271]: I0313 10:51:18.800097 7271 generic.go:334] "Generic (PLEG): container finished" podID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerID="0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4" exitCode=0 Mar 13 10:51:18.800104 master-0 kubenswrapper[7271]: I0313 10:51:18.800114 7271 generic.go:334] "Generic (PLEG): container finished" podID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerID="3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769" exitCode=0 Mar 13 10:51:18.800104 master-0 kubenswrapper[7271]: I0313 10:51:18.800121 7271 generic.go:334] "Generic (PLEG): container finished" podID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" 
containerID="83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f" exitCode=2 Mar 13 10:51:18.800504 master-0 kubenswrapper[7271]: I0313 10:51:18.800143 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" event={"ID":"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3","Type":"ContainerDied","Data":"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4"} Mar 13 10:51:18.800504 master-0 kubenswrapper[7271]: I0313 10:51:18.800179 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" event={"ID":"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3","Type":"ContainerDied","Data":"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769"} Mar 13 10:51:18.800504 master-0 kubenswrapper[7271]: I0313 10:51:18.800189 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" event={"ID":"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3","Type":"ContainerDied","Data":"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f"} Mar 13 10:51:18.800504 master-0 kubenswrapper[7271]: I0313 10:51:18.800203 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" event={"ID":"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3","Type":"ContainerDied","Data":"fc06a3a56daeb8681fdfd097c3d110b5a504914fb61e4dd4e8750b841edf5a9b"} Mar 13 10:51:18.800504 master-0 kubenswrapper[7271]: I0313 10:51:18.800185 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-6745c97c48-vsk4v" Mar 13 10:51:18.800504 master-0 kubenswrapper[7271]: I0313 10:51:18.800273 7271 scope.go:117] "RemoveContainer" containerID="0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4" Mar 13 10:51:18.819656 master-0 kubenswrapper[7271]: I0313 10:51:18.819605 7271 scope.go:117] "RemoveContainer" containerID="3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769" Mar 13 10:51:18.834608 master-0 kubenswrapper[7271]: I0313 10:51:18.834556 7271 scope.go:117] "RemoveContainer" containerID="83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f" Mar 13 10:51:18.850970 master-0 kubenswrapper[7271]: I0313 10:51:18.850924 7271 scope.go:117] "RemoveContainer" containerID="0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4" Mar 13 10:51:18.851429 master-0 kubenswrapper[7271]: E0313 10:51:18.851377 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4\": container with ID starting with 0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4 not found: ID does not exist" containerID="0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4" Mar 13 10:51:18.851535 master-0 kubenswrapper[7271]: I0313 10:51:18.851490 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4"} err="failed to get container status \"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4\": rpc error: code = NotFound desc = could not find container \"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4\": container with ID starting with 0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4 not found: ID does not exist" Mar 13 10:51:18.851599 master-0 kubenswrapper[7271]: I0313 
10:51:18.851532 7271 scope.go:117] "RemoveContainer" containerID="3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769" Mar 13 10:51:18.851920 master-0 kubenswrapper[7271]: E0313 10:51:18.851885 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769\": container with ID starting with 3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769 not found: ID does not exist" containerID="3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769" Mar 13 10:51:18.851959 master-0 kubenswrapper[7271]: I0313 10:51:18.851922 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769"} err="failed to get container status \"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769\": rpc error: code = NotFound desc = could not find container \"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769\": container with ID starting with 3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769 not found: ID does not exist" Mar 13 10:51:18.851959 master-0 kubenswrapper[7271]: I0313 10:51:18.851946 7271 scope.go:117] "RemoveContainer" containerID="83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f" Mar 13 10:51:18.852247 master-0 kubenswrapper[7271]: E0313 10:51:18.852210 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f\": container with ID starting with 83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f not found: ID does not exist" containerID="83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f" Mar 13 10:51:18.852288 master-0 kubenswrapper[7271]: I0313 10:51:18.852244 7271 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f"} err="failed to get container status \"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f\": rpc error: code = NotFound desc = could not find container \"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f\": container with ID starting with 83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f not found: ID does not exist" Mar 13 10:51:18.852288 master-0 kubenswrapper[7271]: I0313 10:51:18.852266 7271 scope.go:117] "RemoveContainer" containerID="0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4" Mar 13 10:51:18.852598 master-0 kubenswrapper[7271]: I0313 10:51:18.852555 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4"} err="failed to get container status \"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4\": rpc error: code = NotFound desc = could not find container \"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4\": container with ID starting with 0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4 not found: ID does not exist" Mar 13 10:51:18.852598 master-0 kubenswrapper[7271]: I0313 10:51:18.852576 7271 scope.go:117] "RemoveContainer" containerID="3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769" Mar 13 10:51:18.852883 master-0 kubenswrapper[7271]: I0313 10:51:18.852836 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769"} err="failed to get container status \"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769\": rpc error: code = NotFound desc = could not find container \"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769\": container with ID starting with 
3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769 not found: ID does not exist" Mar 13 10:51:18.852883 master-0 kubenswrapper[7271]: I0313 10:51:18.852869 7271 scope.go:117] "RemoveContainer" containerID="83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f" Mar 13 10:51:18.853200 master-0 kubenswrapper[7271]: I0313 10:51:18.853140 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f"} err="failed to get container status \"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f\": rpc error: code = NotFound desc = could not find container \"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f\": container with ID starting with 83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f not found: ID does not exist" Mar 13 10:51:18.853200 master-0 kubenswrapper[7271]: I0313 10:51:18.853171 7271 scope.go:117] "RemoveContainer" containerID="0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4" Mar 13 10:51:18.853399 master-0 kubenswrapper[7271]: I0313 10:51:18.853367 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4"} err="failed to get container status \"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4\": rpc error: code = NotFound desc = could not find container \"0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4\": container with ID starting with 0c7f187a23acb3dc5db0a9cb8c86b14d1de1e7c5091cf788c4ed58e2fb671cf4 not found: ID does not exist" Mar 13 10:51:18.853399 master-0 kubenswrapper[7271]: I0313 10:51:18.853389 7271 scope.go:117] "RemoveContainer" containerID="3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769" Mar 13 10:51:18.853704 master-0 kubenswrapper[7271]: I0313 10:51:18.853663 7271 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769"} err="failed to get container status \"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769\": rpc error: code = NotFound desc = could not find container \"3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769\": container with ID starting with 3dd75777dd4d6edc01b0feaaa9b25c45bbc2a9918d88a749f851c4e1b436c769 not found: ID does not exist" Mar 13 10:51:18.853704 master-0 kubenswrapper[7271]: I0313 10:51:18.853693 7271 scope.go:117] "RemoveContainer" containerID="83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f" Mar 13 10:51:18.854200 master-0 kubenswrapper[7271]: I0313 10:51:18.854172 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f"} err="failed to get container status \"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f\": rpc error: code = NotFound desc = could not find container \"83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f\": container with ID starting with 83b7bf76919b4f93927b4bf92aa8881156432b231feabe80a77fed5f33f3157f not found: ID does not exist" Mar 13 10:51:18.883186 master-0 kubenswrapper[7271]: I0313 10:51:18.883113 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:18.883186 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:18.883186 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:18.883186 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:18.883186 master-0 kubenswrapper[7271]: I0313 10:51:18.883170 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:18.919763 master-0 kubenswrapper[7271]: I0313 10:51:18.919717 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-metrics-client-ca\") pod \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " Mar 13 10:51:18.919763 master-0 kubenswrapper[7271]: I0313 10:51:18.919771 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " Mar 13 10:51:18.920327 master-0 kubenswrapper[7271]: I0313 10:51:18.919811 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-serving-certs-ca-bundle\") pod \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " Mar 13 10:51:18.920327 master-0 kubenswrapper[7271]: I0313 10:51:18.920094 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndn2f\" (UniqueName: \"kubernetes.io/projected/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-kube-api-access-ndn2f\") pod \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " Mar 13 10:51:18.920327 master-0 kubenswrapper[7271]: I0313 10:51:18.920140 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"federate-client-tls\" (UniqueName: 
\"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-federate-client-tls\") pod \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " Mar 13 10:51:18.920327 master-0 kubenswrapper[7271]: I0313 10:51:18.920187 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-trusted-ca-bundle\") pod \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " Mar 13 10:51:18.920327 master-0 kubenswrapper[7271]: I0313 10:51:18.920216 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-client-tls\") pod \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " Mar 13 10:51:18.920327 master-0 kubenswrapper[7271]: I0313 10:51:18.920245 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client\") pod \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\" (UID: \"bc4f01ba-a729-4cc8-a2d6-b4efe197efe3\") " Mar 13 10:51:18.921194 master-0 kubenswrapper[7271]: I0313 10:51:18.921164 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-trusted-ca-bundle" (OuterVolumeSpecName: "telemeter-trusted-ca-bundle") pod "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" (UID: "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3"). InnerVolumeSpecName "telemeter-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:51:18.921251 master-0 kubenswrapper[7271]: I0313 10:51:18.921171 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" (UID: "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:51:18.922017 master-0 kubenswrapper[7271]: I0313 10:51:18.921970 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-serving-certs-ca-bundle" (OuterVolumeSpecName: "serving-certs-ca-bundle") pod "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" (UID: "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3"). InnerVolumeSpecName "serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:51:18.923963 master-0 kubenswrapper[7271]: I0313 10:51:18.923914 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-client-tls" (OuterVolumeSpecName: "telemeter-client-tls") pod "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" (UID: "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3"). InnerVolumeSpecName "telemeter-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:51:18.924554 master-0 kubenswrapper[7271]: I0313 10:51:18.924491 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-kube-api-access-ndn2f" (OuterVolumeSpecName: "kube-api-access-ndn2f") pod "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" (UID: "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3"). InnerVolumeSpecName "kube-api-access-ndn2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:51:18.925445 master-0 kubenswrapper[7271]: I0313 10:51:18.925409 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client" (OuterVolumeSpecName: "secret-telemeter-client") pod "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" (UID: "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3"). InnerVolumeSpecName "secret-telemeter-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:51:18.925571 master-0 kubenswrapper[7271]: I0313 10:51:18.925535 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client-kube-rbac-proxy-config" (OuterVolumeSpecName: "secret-telemeter-client-kube-rbac-proxy-config") pod "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" (UID: "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3"). InnerVolumeSpecName "secret-telemeter-client-kube-rbac-proxy-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:51:18.926669 master-0 kubenswrapper[7271]: I0313 10:51:18.926634 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-federate-client-tls" (OuterVolumeSpecName: "federate-client-tls") pod "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" (UID: "bc4f01ba-a729-4cc8-a2d6-b4efe197efe3"). InnerVolumeSpecName "federate-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:51:19.022069 master-0 kubenswrapper[7271]: I0313 10:51:19.021663 7271 reconciler_common.go:293] "Volume detached for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:19.022069 master-0 kubenswrapper[7271]: I0313 10:51:19.021722 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndn2f\" (UniqueName: \"kubernetes.io/projected/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-kube-api-access-ndn2f\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:19.022069 master-0 kubenswrapper[7271]: I0313 10:51:19.021735 7271 reconciler_common.go:293] "Volume detached for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-federate-client-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:19.022069 master-0 kubenswrapper[7271]: I0313 10:51:19.021745 7271 reconciler_common.go:293] "Volume detached for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:19.022069 master-0 kubenswrapper[7271]: I0313 10:51:19.021760 7271 reconciler_common.go:293] "Volume detached for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-telemeter-client-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:19.022069 master-0 kubenswrapper[7271]: I0313 10:51:19.021949 7271 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:19.022069 master-0 kubenswrapper[7271]: I0313 10:51:19.021957 7271 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:19.022069 master-0 kubenswrapper[7271]: I0313 10:51:19.021969 7271 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3-secret-telemeter-client-kube-rbac-proxy-config\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:19.156227 master-0 kubenswrapper[7271]: I0313 10:51:19.156088 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/telemeter-client-6745c97c48-vsk4v"]
Mar 13 10:51:19.161381 master-0 kubenswrapper[7271]: I0313 10:51:19.161317 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/telemeter-client-6745c97c48-vsk4v"]
Mar 13 10:51:19.653383 master-0 kubenswrapper[7271]: I0313 10:51:19.653317 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" path="/var/lib/kubelet/pods/bc4f01ba-a729-4cc8-a2d6-b4efe197efe3/volumes"
Mar 13 10:51:19.882987 master-0 kubenswrapper[7271]: I0313 10:51:19.882944 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:19.882987 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:19.882987 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:19.882987 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:19.883356 master-0 kubenswrapper[7271]: I0313 10:51:19.883329 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:20.882905 master-0 kubenswrapper[7271]: I0313 10:51:20.882823 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:20.882905 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:20.882905 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:20.882905 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:20.883680 master-0 kubenswrapper[7271]: I0313 10:51:20.882923 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:21.883502 master-0 kubenswrapper[7271]: I0313 10:51:21.883411 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:21.883502 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:21.883502 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:21.883502 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:21.883502 master-0 kubenswrapper[7271]: I0313 10:51:21.883487 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:22.831876 master-0 kubenswrapper[7271]: I0313 10:51:22.831812 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Mar 13 10:51:22.832170 master-0 kubenswrapper[7271]: E0313 10:51:22.832129 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" containerName="installer"
Mar 13 10:51:22.832254 master-0 kubenswrapper[7271]: I0313 10:51:22.832187 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" containerName="installer"
Mar 13 10:51:22.832254 master-0 kubenswrapper[7271]: E0313 10:51:22.832212 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="kube-rbac-proxy"
Mar 13 10:51:22.832254 master-0 kubenswrapper[7271]: I0313 10:51:22.832224 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="kube-rbac-proxy"
Mar 13 10:51:22.832254 master-0 kubenswrapper[7271]: E0313 10:51:22.832247 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="telemeter-client"
Mar 13 10:51:22.832254 master-0 kubenswrapper[7271]: I0313 10:51:22.832255 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="telemeter-client"
Mar 13 10:51:22.832678 master-0 kubenswrapper[7271]: E0313 10:51:22.832273 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="reload"
Mar 13 10:51:22.832678 master-0 kubenswrapper[7271]: I0313 10:51:22.832282 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="reload"
Mar 13 10:51:22.832678 master-0 kubenswrapper[7271]: I0313 10:51:22.832421 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="telemeter-client"
Mar 13 10:51:22.832678 master-0 kubenswrapper[7271]: I0313 10:51:22.832435 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" containerName="installer"
Mar 13 10:51:22.832678 master-0 kubenswrapper[7271]: I0313 10:51:22.832448 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="kube-rbac-proxy"
Mar 13 10:51:22.832678 master-0 kubenswrapper[7271]: I0313 10:51:22.832471 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4f01ba-a729-4cc8-a2d6-b4efe197efe3" containerName="reload"
Mar 13 10:51:22.833275 master-0 kubenswrapper[7271]: I0313 10:51:22.833065 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:22.837751 master-0 kubenswrapper[7271]: I0313 10:51:22.835908 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-75rf9"
Mar 13 10:51:22.838432 master-0 kubenswrapper[7271]: I0313 10:51:22.838388 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 13 10:51:22.850879 master-0 kubenswrapper[7271]: I0313 10:51:22.850825 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Mar 13 10:51:22.882837 master-0 kubenswrapper[7271]: I0313 10:51:22.882791 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:22.882837 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:22.882837 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:22.882837 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:22.883132 master-0 kubenswrapper[7271]: I0313 10:51:22.882854 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:22.981115 master-0 kubenswrapper[7271]: I0313 10:51:22.981045 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0a863-e526-43af-81e7-427336d845b0-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:22.981115 master-0 kubenswrapper[7271]: I0313 10:51:22.981110 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:22.981796 master-0 kubenswrapper[7271]: I0313 10:51:22.981169 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-var-lock\") pod \"installer-6-master-0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:23.082993 master-0 kubenswrapper[7271]: I0313 10:51:23.082813 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0a863-e526-43af-81e7-427336d845b0-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:23.082993 master-0 kubenswrapper[7271]: I0313 10:51:23.082873 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:23.082993 master-0 kubenswrapper[7271]: I0313 10:51:23.082927 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-var-lock\") pod \"installer-6-master-0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:23.083337 master-0 kubenswrapper[7271]: I0313 10:51:23.083027 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-var-lock\") pod \"installer-6-master-0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:23.083337 master-0 kubenswrapper[7271]: I0313 10:51:23.083082 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:23.100756 master-0 kubenswrapper[7271]: I0313 10:51:23.100686 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0a863-e526-43af-81e7-427336d845b0-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:23.208641 master-0 kubenswrapper[7271]: I0313 10:51:23.208462 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:51:23.621986 master-0 kubenswrapper[7271]: I0313 10:51:23.621926 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Mar 13 10:51:23.627105 master-0 kubenswrapper[7271]: W0313 10:51:23.627040 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode0d0a863_e526_43af_81e7_427336d845b0.slice/crio-1ab9eaaa1f8cf34f71e1913b674d1f9da187c6ac13d0953972a6b0cfd8a11121 WatchSource:0}: Error finding container 1ab9eaaa1f8cf34f71e1913b674d1f9da187c6ac13d0953972a6b0cfd8a11121: Status 404 returned error can't find the container with id 1ab9eaaa1f8cf34f71e1913b674d1f9da187c6ac13d0953972a6b0cfd8a11121
Mar 13 10:51:23.844568 master-0 kubenswrapper[7271]: I0313 10:51:23.844468 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"e0d0a863-e526-43af-81e7-427336d845b0","Type":"ContainerStarted","Data":"1ab9eaaa1f8cf34f71e1913b674d1f9da187c6ac13d0953972a6b0cfd8a11121"}
Mar 13 10:51:23.883497 master-0 kubenswrapper[7271]: I0313 10:51:23.883324 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:23.883497 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:23.883497 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:23.883497 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:23.883763 master-0 kubenswrapper[7271]: I0313 10:51:23.883505 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:24.854607 master-0 kubenswrapper[7271]: I0313 10:51:24.854524 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"e0d0a863-e526-43af-81e7-427336d845b0","Type":"ContainerStarted","Data":"fe9c58db2cbc934a8ee0143a63a15e0c0fbc1471f2636da95b789cc5a70ed0f0"}
Mar 13 10:51:24.875676 master-0 kubenswrapper[7271]: I0313 10:51:24.875557 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-0" podStartSLOduration=2.875539401 podStartE2EDuration="2.875539401s" podCreationTimestamp="2026-03-13 10:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:51:24.875067999 +0000 UTC m=+939.401890399" watchObservedRunningTime="2026-03-13 10:51:24.875539401 +0000 UTC m=+939.402361801"
Mar 13 10:51:24.887728 master-0 kubenswrapper[7271]: I0313 10:51:24.884301 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:24.887728 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:24.887728 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:24.887728 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:24.887728 master-0 kubenswrapper[7271]: I0313 10:51:24.884394 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:25.883045 master-0 kubenswrapper[7271]: I0313 10:51:25.882982 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:25.883045 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:25.883045 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:25.883045 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:25.884232 master-0 kubenswrapper[7271]: I0313 10:51:25.883048 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:26.883139 master-0 kubenswrapper[7271]: I0313 10:51:26.883052 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:26.883139 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:26.883139 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:26.883139 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:26.883139 master-0 kubenswrapper[7271]: I0313 10:51:26.883116 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:27.882942 master-0 kubenswrapper[7271]: I0313 10:51:27.882863 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:27.882942 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:27.882942 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:27.882942 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:27.883761 master-0 kubenswrapper[7271]: I0313 10:51:27.882966 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:28.886091 master-0 kubenswrapper[7271]: I0313 10:51:28.886000 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:28.886091 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:28.886091 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:28.886091 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:28.887047 master-0 kubenswrapper[7271]: I0313 10:51:28.886101 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:29.888630 master-0 kubenswrapper[7271]: I0313 10:51:29.888541 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:29.888630 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:29.888630 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:29.888630 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:29.889722 master-0 kubenswrapper[7271]: I0313 10:51:29.888643 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:30.883725 master-0 kubenswrapper[7271]: I0313 10:51:30.883634 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:30.883725 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:30.883725 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:30.883725 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:30.884087 master-0 kubenswrapper[7271]: I0313 10:51:30.883775 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:31.395450 master-0 kubenswrapper[7271]: I0313 10:51:31.395335 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 13 10:51:31.398273 master-0 kubenswrapper[7271]: I0313 10:51:31.398212 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.402180 master-0 kubenswrapper[7271]: I0313 10:51:31.402129 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-928wn"
Mar 13 10:51:31.403660 master-0 kubenswrapper[7271]: I0313 10:51:31.403420 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 13 10:51:31.412890 master-0 kubenswrapper[7271]: I0313 10:51:31.412761 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 13 10:51:31.514131 master-0 kubenswrapper[7271]: I0313 10:51:31.514055 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8bdd05f-f920-4441-969f-336c85d2da57-kube-api-access\") pod \"installer-4-master-0\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.514370 master-0 kubenswrapper[7271]: I0313 10:51:31.514297 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.514411 master-0 kubenswrapper[7271]: I0313 10:51:31.514348 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-var-lock\") pod \"installer-4-master-0\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.615822 master-0 kubenswrapper[7271]: I0313 10:51:31.615759 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.616055 master-0 kubenswrapper[7271]: I0313 10:51:31.615866 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.616055 master-0 kubenswrapper[7271]: I0313 10:51:31.615983 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-var-lock\") pod \"installer-4-master-0\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.616121 master-0 kubenswrapper[7271]: I0313 10:51:31.616061 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-var-lock\") pod \"installer-4-master-0\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.616352 master-0 kubenswrapper[7271]: I0313 10:51:31.616325 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8bdd05f-f920-4441-969f-336c85d2da57-kube-api-access\") pod \"installer-4-master-0\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.636283 master-0 kubenswrapper[7271]: I0313 10:51:31.636244 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8bdd05f-f920-4441-969f-336c85d2da57-kube-api-access\") pod \"installer-4-master-0\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.731156 master-0 kubenswrapper[7271]: I0313 10:51:31.730989 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:51:31.882274 master-0 kubenswrapper[7271]: I0313 10:51:31.882224 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:31.882274 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:31.882274 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:31.882274 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:31.882548 master-0 kubenswrapper[7271]: I0313 10:51:31.882282 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:32.167082 master-0 kubenswrapper[7271]: I0313 10:51:32.167029 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 13 10:51:32.882371 master-0 kubenswrapper[7271]: I0313 10:51:32.882318 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:32.882371 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:32.882371 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:32.882371 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:32.883044 master-0 kubenswrapper[7271]: I0313 10:51:32.882377 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:32.915454 master-0 kubenswrapper[7271]: I0313 10:51:32.915389 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"d8bdd05f-f920-4441-969f-336c85d2da57","Type":"ContainerStarted","Data":"c54439de52c783224aa04045b8c8a51003280811e42de25b97607e84d8c7daa8"}
Mar 13 10:51:32.915454 master-0 kubenswrapper[7271]: I0313 10:51:32.915441 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"d8bdd05f-f920-4441-969f-336c85d2da57","Type":"ContainerStarted","Data":"a275b154e2bd75d46956f1b7e89d0825c0f4544634205616a815e2c59d1fd381"}
Mar 13 10:51:32.940333 master-0 kubenswrapper[7271]: I0313 10:51:32.940249 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=1.9402272969999999 podStartE2EDuration="1.940227297s" podCreationTimestamp="2026-03-13 10:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:51:32.937212135 +0000 UTC m=+947.464034565" watchObservedRunningTime="2026-03-13 10:51:32.940227297 +0000 UTC m=+947.467049687"
Mar 13 10:51:33.881990 master-0 kubenswrapper[7271]: I0313 10:51:33.881905 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:33.881990 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:33.881990 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:33.881990 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:33.881990 master-0 kubenswrapper[7271]: I0313 10:51:33.881974 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:34.883347 master-0 kubenswrapper[7271]: I0313 10:51:34.883276 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:34.883347 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:34.883347 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:34.883347 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:34.883971 master-0 kubenswrapper[7271]: I0313 10:51:34.883359 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:35.883520 master-0 kubenswrapper[7271]: I0313 10:51:35.883445 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:35.883520 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:35.883520 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:35.883520 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:35.883520 master-0 kubenswrapper[7271]: I0313 10:51:35.883515 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:36.882660 master-0 kubenswrapper[7271]: I0313 10:51:36.882565 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:36.882660 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:36.882660 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:36.882660 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:36.882941 master-0 kubenswrapper[7271]: I0313 10:51:36.882686 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:37.883545 master-0 kubenswrapper[7271]: I0313 10:51:37.883431 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:37.883545 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:37.883545 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:37.883545 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:37.883545 master-0 kubenswrapper[7271]: I0313 10:51:37.883489 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:38.883630 master-0 kubenswrapper[7271]: I0313 10:51:38.883493 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:38.883630 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:38.883630 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:38.883630 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:38.884668 master-0 kubenswrapper[7271]: I0313 10:51:38.883642 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:39.882773 master-0 kubenswrapper[7271]: I0313 10:51:39.882714 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:39.882773 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:39.882773 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:39.882773 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:39.882773 master-0 kubenswrapper[7271]: I0313 10:51:39.882773 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:40.883710 master-0 kubenswrapper[7271]: I0313 10:51:40.883620 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:40.883710 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:40.883710 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:40.883710 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:40.883710 master-0 kubenswrapper[7271]: I0313 10:51:40.883713 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:41.883900 master-0 kubenswrapper[7271]: I0313 10:51:41.883667 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:41.883900 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:41.883900 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:41.883900 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:41.883900 master-0 kubenswrapper[7271]: I0313 10:51:41.883783 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:42.882556 master-0 kubenswrapper[7271]: I0313 10:51:42.882508 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:42.882556 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:42.882556 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:42.882556 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:42.882947 master-0 kubenswrapper[7271]: I0313 10:51:42.882577 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:43.884746 master-0 kubenswrapper[7271]: I0313 10:51:43.884677 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:43.884746 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:43.884746 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:43.884746 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:43.885493 master-0 kubenswrapper[7271]: I0313 10:51:43.884781 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:44.882531 master-0 kubenswrapper[7271]: I0313 10:51:44.882475 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:44.882531 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:44.882531 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:44.882531 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:44.882858 master-0 kubenswrapper[7271]: I0313 10:51:44.882555 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:45.883328 master-0 kubenswrapper[7271]: I0313 10:51:45.883269 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:45.883328 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:45.883328 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:45.883328 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:45.885118 master-0 kubenswrapper[7271]: I0313 10:51:45.885068 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:46.883658 master-0 kubenswrapper[7271]: I0313 10:51:46.883551 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:46.883658 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:46.883658 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:46.883658 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:46.884582 master-0 kubenswrapper[7271]: I0313 10:51:46.883675 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:47.883629 master-0 kubenswrapper[7271]: I0313 10:51:47.883488 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:47.883629 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:47.883629 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:47.883629 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:47.883629 master-0 kubenswrapper[7271]: I0313 10:51:47.883619 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:48.883949 master-0 kubenswrapper[7271]: I0313 10:51:48.883858 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:48.883949 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:48.883949 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:48.883949 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:48.883949 master-0 kubenswrapper[7271]: I0313 10:51:48.883935 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:49.882567 master-0 kubenswrapper[7271]: I0313 10:51:49.882514 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:49.882567 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:49.882567 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:49.882567 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:49.882979 master-0 kubenswrapper[7271]: I0313 10:51:49.882578 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:50.883038 master-0 kubenswrapper[7271]: I0313 10:51:50.882960 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:50.883038 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:50.883038 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:50.883038 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:50.883038 master-0 kubenswrapper[7271]: I0313 10:51:50.883031 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:51.884399 master-0 kubenswrapper[7271]: I0313 10:51:51.884288 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:51.884399 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:51.884399 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:51.884399 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:51.884399 master-0 kubenswrapper[7271]: I0313 10:51:51.884370 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:52.056072 master-0 kubenswrapper[7271]: I0313 10:51:52.056029 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/4.log"
Mar 13 10:51:52.056723 master-0 kubenswrapper[7271]: I0313 10:51:52.056687 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/3.log"
Mar 13 10:51:52.057226 master-0 kubenswrapper[7271]: I0313 10:51:52.057180 7271 generic.go:334] "Generic (PLEG): container finished" podID="7667717b-fb74-456b-8615-16475cb69e98" containerID="e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a" exitCode=1
Mar 13 10:51:52.057320 master-0 kubenswrapper[7271]: I0313 10:51:52.057229 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerDied","Data":"e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a"}
Mar 13 10:51:52.057320 master-0 kubenswrapper[7271]: I0313 10:51:52.057275 7271 scope.go:117] "RemoveContainer" containerID="ea6075437ddab13db72254693b1402fadb6322d2c4a635387e569d11ef32e573"
Mar 13 10:51:52.057992 master-0 kubenswrapper[7271]: I0313 10:51:52.057915 7271 scope.go:117] "RemoveContainer" containerID="e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a"
Mar 13 10:51:52.058291 master-0 kubenswrapper[7271]: E0313 10:51:52.058245 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98"
Mar 13 10:51:52.883469 master-0 kubenswrapper[7271]: I0313 10:51:52.883403 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:52.883469 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:52.883469 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:52.883469 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:52.883469 master-0 kubenswrapper[7271]: I0313 10:51:52.883467 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:53.067368 master-0 kubenswrapper[7271]: I0313 10:51:53.067307 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/4.log"
Mar 13 10:51:53.883987 master-0 kubenswrapper[7271]: I0313 10:51:53.883923 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:53.883987 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:53.883987 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:53.883987 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:53.884722 master-0 kubenswrapper[7271]: I0313 10:51:53.884667 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:54.884530 master-0 kubenswrapper[7271]: I0313 10:51:54.884467 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:54.884530 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:54.884530 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:54.884530 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:54.885871 master-0 kubenswrapper[7271]: I0313 10:51:54.885818 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:55.366043 master-0
kubenswrapper[7271]: I0313 10:51:55.365974 7271 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 10:51:55.366499 master-0 kubenswrapper[7271]: I0313 10:51:55.366433 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-cert-syncer" containerID="cri-o://82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee" gracePeriod=30
Mar 13 10:51:55.366632 master-0 kubenswrapper[7271]: I0313 10:51:55.366522 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler" containerID="cri-o://15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2" gracePeriod=30
Mar 13 10:51:55.367966 master-0 kubenswrapper[7271]: I0313 10:51:55.367057 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-recovery-controller" containerID="cri-o://7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842" gracePeriod=30
Mar 13 10:51:55.369338 master-0 kubenswrapper[7271]: I0313 10:51:55.369290 7271 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: E0313 10:51:55.369871 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="wait-for-host-port"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: I0313 10:51:55.369917 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="wait-for-host-port"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: E0313 10:51:55.369946 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: I0313 10:51:55.369960 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: E0313 10:51:55.369999 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-cert-syncer"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: I0313 10:51:55.370012 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-cert-syncer"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: E0313 10:51:55.370032 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-recovery-controller"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: I0313 10:51:55.370046 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-recovery-controller"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: I0313 10:51:55.370258 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: I0313 10:51:55.370280 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-recovery-controller"
Mar 13 10:51:55.370425 master-0 kubenswrapper[7271]: I0313 10:51:55.370309 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-cert-syncer"
Mar 13 10:51:55.371251 master-0 kubenswrapper[7271]: E0313 10:51:55.370538 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler"
Mar 13 10:51:55.371251 master-0 kubenswrapper[7271]: I0313 10:51:55.370555 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler"
Mar 13 10:51:55.371251 master-0 kubenswrapper[7271]: I0313 10:51:55.370930 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler"
Mar 13 10:51:55.473729 master-0 kubenswrapper[7271]: I0313 10:51:55.473651 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:51:55.473729 master-0 kubenswrapper[7271]: I0313 10:51:55.473736 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:51:55.557257 master-0 kubenswrapper[7271]: I0313 10:51:55.557209 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler-cert-syncer/0.log"
Mar 13 10:51:55.557988 master-0 kubenswrapper[7271]: I0313 10:51:55.557727 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler/0.log"
Mar 13 10:51:55.558570 master-0 kubenswrapper[7271]: I0313 10:51:55.558295 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:51:55.561994 master-0 kubenswrapper[7271]: I0313 10:51:55.561861 7271 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1453f6461bf5d599ad65a4656343ee91" podUID="aa6a75ab47c06be4e74d05f552da4470"
Mar 13 10:51:55.575659 master-0 kubenswrapper[7271]: I0313 10:51:55.575566 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:51:55.575659 master-0 kubenswrapper[7271]: I0313 10:51:55.575648 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:51:55.575908 master-0 kubenswrapper[7271]: I0313 10:51:55.575665 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:51:55.575908 master-0 kubenswrapper[7271]: I0313 10:51:55.575802 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:51:55.677127 master-0 kubenswrapper[7271]: I0313 10:51:55.676926 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"1453f6461bf5d599ad65a4656343ee91\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") "
Mar 13 10:51:55.677127 master-0 kubenswrapper[7271]: I0313 10:51:55.677084 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"1453f6461bf5d599ad65a4656343ee91\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") "
Mar 13 10:51:55.677515 master-0 kubenswrapper[7271]: I0313 10:51:55.677130 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "1453f6461bf5d599ad65a4656343ee91" (UID: "1453f6461bf5d599ad65a4656343ee91"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:51:55.677515 master-0 kubenswrapper[7271]: I0313 10:51:55.677297 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "1453f6461bf5d599ad65a4656343ee91" (UID: "1453f6461bf5d599ad65a4656343ee91"). InnerVolumeSpecName "resource-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:51:55.677515 master-0 kubenswrapper[7271]: I0313 10:51:55.677340 7271 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:55.779063 master-0 kubenswrapper[7271]: I0313 10:51:55.778981 7271 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 10:51:55.882854 master-0 kubenswrapper[7271]: I0313 10:51:55.882804 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:51:55.882854 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:51:55.882854 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:51:55.882854 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:51:55.882854 master-0 kubenswrapper[7271]: I0313 10:51:55.882870 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:51:56.094575 master-0 kubenswrapper[7271]: I0313 10:51:56.094535 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler-cert-syncer/0.log"
Mar 13 10:51:56.096064 master-0 kubenswrapper[7271]: I0313 10:51:56.095126 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler/0.log"
Mar 13 10:51:56.096064 master-0 kubenswrapper[7271]: I0313 10:51:56.095460 7271 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2" exitCode=0
Mar 13 10:51:56.096064 master-0 kubenswrapper[7271]: I0313 10:51:56.095482 7271 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842" exitCode=0
Mar 13 10:51:56.096064 master-0 kubenswrapper[7271]: I0313 10:51:56.095494 7271 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee" exitCode=2
Mar 13 10:51:56.096064 master-0 kubenswrapper[7271]: I0313 10:51:56.095613 7271 scope.go:117] "RemoveContainer" containerID="15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2"
Mar 13 10:51:56.096064 master-0 kubenswrapper[7271]: I0313 10:51:56.095625 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:51:56.097662 master-0 kubenswrapper[7271]: I0313 10:51:56.097113 7271 generic.go:334] "Generic (PLEG): container finished" podID="e0d0a863-e526-43af-81e7-427336d845b0" containerID="fe9c58db2cbc934a8ee0143a63a15e0c0fbc1471f2636da95b789cc5a70ed0f0" exitCode=0
Mar 13 10:51:56.097662 master-0 kubenswrapper[7271]: I0313 10:51:56.097340 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"e0d0a863-e526-43af-81e7-427336d845b0","Type":"ContainerDied","Data":"fe9c58db2cbc934a8ee0143a63a15e0c0fbc1471f2636da95b789cc5a70ed0f0"}
Mar 13 10:51:56.099328 master-0 kubenswrapper[7271]: I0313 10:51:56.099284 7271 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1453f6461bf5d599ad65a4656343ee91" podUID="aa6a75ab47c06be4e74d05f552da4470"
Mar 13 10:51:56.116671 master-0 kubenswrapper[7271]: I0313 10:51:56.116479 7271 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1453f6461bf5d599ad65a4656343ee91" podUID="aa6a75ab47c06be4e74d05f552da4470"
Mar 13 10:51:56.117805 master-0 kubenswrapper[7271]: I0313 10:51:56.116899 7271 scope.go:117] "RemoveContainer" containerID="7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842"
Mar 13 10:51:56.138205 master-0 kubenswrapper[7271]: I0313 10:51:56.138163 7271 scope.go:117] "RemoveContainer" containerID="82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee"
Mar 13 10:51:56.153861 master-0 kubenswrapper[7271]: I0313 10:51:56.153827 7271 scope.go:117] "RemoveContainer" containerID="43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675"
Mar 13 10:51:56.167826 master-0 kubenswrapper[7271]: I0313 10:51:56.167776 7271 scope.go:117] "RemoveContainer" containerID="bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952"
Mar 13 10:51:56.180372 master-0 kubenswrapper[7271]: I0313 10:51:56.180337 7271 scope.go:117] "RemoveContainer" containerID="15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2"
Mar 13 10:51:56.180822 master-0 kubenswrapper[7271]: E0313 10:51:56.180800 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2\": container with ID starting with 15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2 not found: ID does not exist" containerID="15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2"
Mar 13 10:51:56.180884 master-0 kubenswrapper[7271]: I0313 10:51:56.180830 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2"} err="failed to get container status \"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2\": rpc error: code = NotFound desc = could not find container \"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2\": container with ID starting with 15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2 not found: ID does not exist"
Mar 13 10:51:56.180884 master-0 kubenswrapper[7271]: I0313 10:51:56.180850 7271 scope.go:117] "RemoveContainer" containerID="7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842"
Mar 13 10:51:56.181284 master-0 kubenswrapper[7271]: E0313 10:51:56.181187 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842\": container with ID starting with 7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842 not found: ID does not exist"
containerID="7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842" Mar 13 10:51:56.181284 master-0 kubenswrapper[7271]: I0313 10:51:56.181240 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842"} err="failed to get container status \"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842\": rpc error: code = NotFound desc = could not find container \"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842\": container with ID starting with 7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842 not found: ID does not exist" Mar 13 10:51:56.181284 master-0 kubenswrapper[7271]: I0313 10:51:56.181256 7271 scope.go:117] "RemoveContainer" containerID="82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee" Mar 13 10:51:56.181519 master-0 kubenswrapper[7271]: E0313 10:51:56.181496 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee\": container with ID starting with 82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee not found: ID does not exist" containerID="82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee" Mar 13 10:51:56.181575 master-0 kubenswrapper[7271]: I0313 10:51:56.181519 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee"} err="failed to get container status \"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee\": rpc error: code = NotFound desc = could not find container \"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee\": container with ID starting with 82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee not found: ID does not exist" Mar 13 10:51:56.181575 master-0 
kubenswrapper[7271]: I0313 10:51:56.181533 7271 scope.go:117] "RemoveContainer" containerID="43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675" Mar 13 10:51:56.182273 master-0 kubenswrapper[7271]: E0313 10:51:56.182239 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675\": container with ID starting with 43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675 not found: ID does not exist" containerID="43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675" Mar 13 10:51:56.182273 master-0 kubenswrapper[7271]: I0313 10:51:56.182265 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675"} err="failed to get container status \"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675\": rpc error: code = NotFound desc = could not find container \"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675\": container with ID starting with 43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675 not found: ID does not exist" Mar 13 10:51:56.182457 master-0 kubenswrapper[7271]: I0313 10:51:56.182279 7271 scope.go:117] "RemoveContainer" containerID="bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952" Mar 13 10:51:56.182573 master-0 kubenswrapper[7271]: E0313 10:51:56.182522 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952\": container with ID starting with bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952 not found: ID does not exist" containerID="bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952" Mar 13 10:51:56.182573 master-0 kubenswrapper[7271]: I0313 10:51:56.182547 7271 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952"} err="failed to get container status \"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952\": rpc error: code = NotFound desc = could not find container \"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952\": container with ID starting with bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952 not found: ID does not exist" Mar 13 10:51:56.182573 master-0 kubenswrapper[7271]: I0313 10:51:56.182561 7271 scope.go:117] "RemoveContainer" containerID="15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2" Mar 13 10:51:56.182819 master-0 kubenswrapper[7271]: I0313 10:51:56.182791 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2"} err="failed to get container status \"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2\": rpc error: code = NotFound desc = could not find container \"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2\": container with ID starting with 15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2 not found: ID does not exist" Mar 13 10:51:56.182819 master-0 kubenswrapper[7271]: I0313 10:51:56.182810 7271 scope.go:117] "RemoveContainer" containerID="7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842" Mar 13 10:51:56.183063 master-0 kubenswrapper[7271]: I0313 10:51:56.183035 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842"} err="failed to get container status \"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842\": rpc error: code = NotFound desc = could not find container \"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842\": container 
with ID starting with 7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842 not found: ID does not exist" Mar 13 10:51:56.183063 master-0 kubenswrapper[7271]: I0313 10:51:56.183052 7271 scope.go:117] "RemoveContainer" containerID="82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee" Mar 13 10:51:56.183277 master-0 kubenswrapper[7271]: I0313 10:51:56.183252 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee"} err="failed to get container status \"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee\": rpc error: code = NotFound desc = could not find container \"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee\": container with ID starting with 82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee not found: ID does not exist" Mar 13 10:51:56.183277 master-0 kubenswrapper[7271]: I0313 10:51:56.183275 7271 scope.go:117] "RemoveContainer" containerID="43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675" Mar 13 10:51:56.183534 master-0 kubenswrapper[7271]: I0313 10:51:56.183488 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675"} err="failed to get container status \"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675\": rpc error: code = NotFound desc = could not find container \"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675\": container with ID starting with 43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675 not found: ID does not exist" Mar 13 10:51:56.183534 master-0 kubenswrapper[7271]: I0313 10:51:56.183507 7271 scope.go:117] "RemoveContainer" containerID="bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952" Mar 13 10:51:56.183724 master-0 kubenswrapper[7271]: I0313 10:51:56.183710 7271 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952"} err="failed to get container status \"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952\": rpc error: code = NotFound desc = could not find container \"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952\": container with ID starting with bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952 not found: ID does not exist" Mar 13 10:51:56.183801 master-0 kubenswrapper[7271]: I0313 10:51:56.183728 7271 scope.go:117] "RemoveContainer" containerID="15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2" Mar 13 10:51:56.183948 master-0 kubenswrapper[7271]: I0313 10:51:56.183919 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2"} err="failed to get container status \"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2\": rpc error: code = NotFound desc = could not find container \"15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2\": container with ID starting with 15741af060118f3e381d0dfb1ee3e5fb5753e0f2ecd3bd9c1bb2cd5d2e3118b2 not found: ID does not exist" Mar 13 10:51:56.183948 master-0 kubenswrapper[7271]: I0313 10:51:56.183937 7271 scope.go:117] "RemoveContainer" containerID="7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842" Mar 13 10:51:56.184170 master-0 kubenswrapper[7271]: I0313 10:51:56.184142 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842"} err="failed to get container status \"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842\": rpc error: code = NotFound desc = could not find container \"7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842\": container 
with ID starting with 7ee24573eadbe936df17a73f1ced59df1c8048f4cab5dc78db5a583fed860842 not found: ID does not exist" Mar 13 10:51:56.184170 master-0 kubenswrapper[7271]: I0313 10:51:56.184161 7271 scope.go:117] "RemoveContainer" containerID="82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee" Mar 13 10:51:56.184380 master-0 kubenswrapper[7271]: I0313 10:51:56.184360 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee"} err="failed to get container status \"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee\": rpc error: code = NotFound desc = could not find container \"82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee\": container with ID starting with 82dadde12f5361ce0eb3b675be270a1756d0638eecca64a500eee792e9250aee not found: ID does not exist" Mar 13 10:51:56.184380 master-0 kubenswrapper[7271]: I0313 10:51:56.184379 7271 scope.go:117] "RemoveContainer" containerID="43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675" Mar 13 10:51:56.184579 master-0 kubenswrapper[7271]: I0313 10:51:56.184560 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675"} err="failed to get container status \"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675\": rpc error: code = NotFound desc = could not find container \"43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675\": container with ID starting with 43ba0959628c64e0d608f9791dee766703f553a2832a37d8a48d8d334e5bb675 not found: ID does not exist" Mar 13 10:51:56.184579 master-0 kubenswrapper[7271]: I0313 10:51:56.184576 7271 scope.go:117] "RemoveContainer" containerID="bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952" Mar 13 10:51:56.184803 master-0 kubenswrapper[7271]: I0313 10:51:56.184783 7271 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952"} err="failed to get container status \"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952\": rpc error: code = NotFound desc = could not find container \"bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952\": container with ID starting with bebc8a4e65e4b1cebb33cee990d463769cc2e7c15f0f02b297e6dd5e8eb72952 not found: ID does not exist" Mar 13 10:51:56.882777 master-0 kubenswrapper[7271]: I0313 10:51:56.882654 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:56.882777 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:56.882777 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:56.882777 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:56.882777 master-0 kubenswrapper[7271]: I0313 10:51:56.882756 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:57.394786 master-0 kubenswrapper[7271]: I0313 10:51:57.394749 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 10:51:57.520279 master-0 kubenswrapper[7271]: I0313 10:51:57.520124 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-var-lock\") pod \"e0d0a863-e526-43af-81e7-427336d845b0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " Mar 13 10:51:57.520462 master-0 kubenswrapper[7271]: I0313 10:51:57.520291 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0a863-e526-43af-81e7-427336d845b0-kube-api-access\") pod \"e0d0a863-e526-43af-81e7-427336d845b0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " Mar 13 10:51:57.520462 master-0 kubenswrapper[7271]: I0313 10:51:57.520287 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-var-lock" (OuterVolumeSpecName: "var-lock") pod "e0d0a863-e526-43af-81e7-427336d845b0" (UID: "e0d0a863-e526-43af-81e7-427336d845b0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:51:57.520462 master-0 kubenswrapper[7271]: I0313 10:51:57.520330 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-kubelet-dir\") pod \"e0d0a863-e526-43af-81e7-427336d845b0\" (UID: \"e0d0a863-e526-43af-81e7-427336d845b0\") " Mar 13 10:51:57.520576 master-0 kubenswrapper[7271]: I0313 10:51:57.520514 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e0d0a863-e526-43af-81e7-427336d845b0" (UID: "e0d0a863-e526-43af-81e7-427336d845b0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:51:57.520576 master-0 kubenswrapper[7271]: I0313 10:51:57.520550 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:51:57.522940 master-0 kubenswrapper[7271]: I0313 10:51:57.522889 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d0a863-e526-43af-81e7-427336d845b0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e0d0a863-e526-43af-81e7-427336d845b0" (UID: "e0d0a863-e526-43af-81e7-427336d845b0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:51:57.622192 master-0 kubenswrapper[7271]: I0313 10:51:57.622129 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d0a863-e526-43af-81e7-427336d845b0-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:51:57.622192 master-0 kubenswrapper[7271]: I0313 10:51:57.622174 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d0a863-e526-43af-81e7-427336d845b0-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:51:57.654665 master-0 kubenswrapper[7271]: I0313 10:51:57.654605 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1453f6461bf5d599ad65a4656343ee91" path="/var/lib/kubelet/pods/1453f6461bf5d599ad65a4656343ee91/volumes" Mar 13 10:51:57.883769 master-0 kubenswrapper[7271]: I0313 10:51:57.883702 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:57.883769 master-0 kubenswrapper[7271]: [-]has-synced 
failed: reason withheld Mar 13 10:51:57.883769 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:57.883769 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:57.884205 master-0 kubenswrapper[7271]: I0313 10:51:57.883778 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:58.125538 master-0 kubenswrapper[7271]: I0313 10:51:58.125480 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"e0d0a863-e526-43af-81e7-427336d845b0","Type":"ContainerDied","Data":"1ab9eaaa1f8cf34f71e1913b674d1f9da187c6ac13d0953972a6b0cfd8a11121"} Mar 13 10:51:58.125538 master-0 kubenswrapper[7271]: I0313 10:51:58.125516 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 10:51:58.125538 master-0 kubenswrapper[7271]: I0313 10:51:58.125523 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ab9eaaa1f8cf34f71e1913b674d1f9da187c6ac13d0953972a6b0cfd8a11121" Mar 13 10:51:58.883160 master-0 kubenswrapper[7271]: I0313 10:51:58.883076 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:58.883160 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:58.883160 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:58.883160 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:58.884039 master-0 kubenswrapper[7271]: I0313 10:51:58.883191 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:51:59.883704 master-0 kubenswrapper[7271]: I0313 10:51:59.883636 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:51:59.883704 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:51:59.883704 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:51:59.883704 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:51:59.884436 master-0 kubenswrapper[7271]: I0313 10:51:59.883713 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:00.883524 master-0 kubenswrapper[7271]: I0313 10:52:00.883082 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:00.883524 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:00.883524 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:00.883524 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:00.883524 master-0 kubenswrapper[7271]: I0313 10:52:00.883198 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:01.883691 
master-0 kubenswrapper[7271]: I0313 10:52:01.883621 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:01.883691 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:01.883691 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:01.883691 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:01.883691 master-0 kubenswrapper[7271]: I0313 10:52:01.883689 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:02.882755 master-0 kubenswrapper[7271]: I0313 10:52:02.882664 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:02.882755 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:02.882755 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:02.882755 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:02.882755 master-0 kubenswrapper[7271]: I0313 10:52:02.882753 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:03.885052 master-0 kubenswrapper[7271]: I0313 10:52:03.884373 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:03.885052 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:03.885052 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:03.885052 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:03.886059 master-0 kubenswrapper[7271]: I0313 10:52:03.885088 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:04.883388 master-0 kubenswrapper[7271]: I0313 10:52:04.883320 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:04.883388 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:04.883388 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:04.883388 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:04.883388 master-0 kubenswrapper[7271]: I0313 10:52:04.883386 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:05.173213 master-0 kubenswrapper[7271]: I0313 10:52:05.173093 7271 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 10:52:05.173784 master-0 kubenswrapper[7271]: I0313 10:52:05.173435 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager" containerID="cri-o://a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c" gracePeriod=30 Mar 13 10:52:05.173784 master-0 kubenswrapper[7271]: I0313 10:52:05.173502 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d" gracePeriod=30 Mar 13 10:52:05.173784 master-0 kubenswrapper[7271]: I0313 10:52:05.173500 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef" gracePeriod=30 Mar 13 10:52:05.173784 master-0 kubenswrapper[7271]: I0313 10:52:05.173502 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e17958e403057a147cdad09d1abe4cda" containerName="cluster-policy-controller" containerID="cri-o://138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a" gracePeriod=30 Mar 13 10:52:05.174887 master-0 kubenswrapper[7271]: I0313 10:52:05.174818 7271 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 10:52:05.175227 master-0 kubenswrapper[7271]: E0313 10:52:05.175208 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager" Mar 13 10:52:05.175276 master-0 kubenswrapper[7271]: I0313 10:52:05.175243 7271 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager" Mar 13 10:52:05.175276 master-0 kubenswrapper[7271]: E0313 10:52:05.175257 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d0a863-e526-43af-81e7-427336d845b0" containerName="installer" Mar 13 10:52:05.175368 master-0 kubenswrapper[7271]: I0313 10:52:05.175263 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d0a863-e526-43af-81e7-427336d845b0" containerName="installer" Mar 13 10:52:05.175417 master-0 kubenswrapper[7271]: E0313 10:52:05.175376 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager-recovery-controller" Mar 13 10:52:05.175417 master-0 kubenswrapper[7271]: I0313 10:52:05.175384 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager-recovery-controller" Mar 13 10:52:05.175417 master-0 kubenswrapper[7271]: E0313 10:52:05.175404 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17958e403057a147cdad09d1abe4cda" containerName="cluster-policy-controller" Mar 13 10:52:05.175417 master-0 kubenswrapper[7271]: I0313 10:52:05.175411 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17958e403057a147cdad09d1abe4cda" containerName="cluster-policy-controller" Mar 13 10:52:05.175677 master-0 kubenswrapper[7271]: E0313 10:52:05.175654 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager-cert-syncer" Mar 13 10:52:05.175677 master-0 kubenswrapper[7271]: I0313 10:52:05.175672 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager-cert-syncer" Mar 13 10:52:05.175867 master-0 kubenswrapper[7271]: I0313 10:52:05.175840 7271 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager" Mar 13 10:52:05.175867 master-0 kubenswrapper[7271]: I0313 10:52:05.175857 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager-recovery-controller" Mar 13 10:52:05.175956 master-0 kubenswrapper[7271]: I0313 10:52:05.175897 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17958e403057a147cdad09d1abe4cda" containerName="cluster-policy-controller" Mar 13 10:52:05.175956 master-0 kubenswrapper[7271]: I0313 10:52:05.175909 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17958e403057a147cdad09d1abe4cda" containerName="kube-controller-manager-cert-syncer" Mar 13 10:52:05.175956 master-0 kubenswrapper[7271]: I0313 10:52:05.175916 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d0a863-e526-43af-81e7-427336d845b0" containerName="installer" Mar 13 10:52:05.330966 master-0 kubenswrapper[7271]: I0313 10:52:05.330910 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:52:05.331145 master-0 kubenswrapper[7271]: I0313 10:52:05.330978 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:52:05.365139 master-0 kubenswrapper[7271]: I0313 10:52:05.365067 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e17958e403057a147cdad09d1abe4cda/kube-controller-manager-cert-syncer/0.log" Mar 13 10:52:05.368649 master-0 kubenswrapper[7271]: I0313 10:52:05.365956 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:52:05.368893 master-0 kubenswrapper[7271]: I0313 10:52:05.368683 7271 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="e17958e403057a147cdad09d1abe4cda" podUID="6aa84d96c35221e650d254cec915ee90" Mar 13 10:52:05.432793 master-0 kubenswrapper[7271]: I0313 10:52:05.432644 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-cert-dir\") pod \"e17958e403057a147cdad09d1abe4cda\" (UID: \"e17958e403057a147cdad09d1abe4cda\") " Mar 13 10:52:05.432793 master-0 kubenswrapper[7271]: I0313 10:52:05.432766 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "e17958e403057a147cdad09d1abe4cda" (UID: "e17958e403057a147cdad09d1abe4cda"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:52:05.433038 master-0 kubenswrapper[7271]: I0313 10:52:05.432825 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-resource-dir\") pod \"e17958e403057a147cdad09d1abe4cda\" (UID: \"e17958e403057a147cdad09d1abe4cda\") " Mar 13 10:52:05.433098 master-0 kubenswrapper[7271]: I0313 10:52:05.433014 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "e17958e403057a147cdad09d1abe4cda" (UID: "e17958e403057a147cdad09d1abe4cda"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:52:05.433178 master-0 kubenswrapper[7271]: I0313 10:52:05.433044 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:52:05.433236 master-0 kubenswrapper[7271]: I0313 10:52:05.433088 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:52:05.433322 master-0 kubenswrapper[7271]: I0313 10:52:05.433285 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:52:05.433417 master-0 kubenswrapper[7271]: I0313 10:52:05.433317 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:52:05.433565 master-0 kubenswrapper[7271]: I0313 10:52:05.433547 7271 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:52:05.433565 master-0 kubenswrapper[7271]: I0313 10:52:05.433565 7271 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e17958e403057a147cdad09d1abe4cda-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:52:05.653137 master-0 kubenswrapper[7271]: I0313 10:52:05.653057 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17958e403057a147cdad09d1abe4cda" path="/var/lib/kubelet/pods/e17958e403057a147cdad09d1abe4cda/volumes" Mar 13 10:52:05.884426 master-0 kubenswrapper[7271]: I0313 10:52:05.884331 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:05.884426 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:05.884426 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:05.884426 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:05.885090 master-0 kubenswrapper[7271]: I0313 10:52:05.884444 7271 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:06.175337 master-0 kubenswrapper[7271]: I0313 10:52:06.174954 7271 generic.go:334] "Generic (PLEG): container finished" podID="d8bdd05f-f920-4441-969f-336c85d2da57" containerID="c54439de52c783224aa04045b8c8a51003280811e42de25b97607e84d8c7daa8" exitCode=0 Mar 13 10:52:06.175337 master-0 kubenswrapper[7271]: I0313 10:52:06.175032 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"d8bdd05f-f920-4441-969f-336c85d2da57","Type":"ContainerDied","Data":"c54439de52c783224aa04045b8c8a51003280811e42de25b97607e84d8c7daa8"} Mar 13 10:52:06.178430 master-0 kubenswrapper[7271]: I0313 10:52:06.178397 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e17958e403057a147cdad09d1abe4cda/kube-controller-manager-cert-syncer/0.log" Mar 13 10:52:06.179032 master-0 kubenswrapper[7271]: I0313 10:52:06.179009 7271 generic.go:334] "Generic (PLEG): container finished" podID="e17958e403057a147cdad09d1abe4cda" containerID="bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef" exitCode=0 Mar 13 10:52:06.179111 master-0 kubenswrapper[7271]: I0313 10:52:06.179035 7271 generic.go:334] "Generic (PLEG): container finished" podID="e17958e403057a147cdad09d1abe4cda" containerID="c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d" exitCode=2 Mar 13 10:52:06.179111 master-0 kubenswrapper[7271]: I0313 10:52:06.179047 7271 generic.go:334] "Generic (PLEG): container finished" podID="e17958e403057a147cdad09d1abe4cda" containerID="138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a" exitCode=0 Mar 13 10:52:06.179111 master-0 kubenswrapper[7271]: I0313 10:52:06.179057 7271 generic.go:334] 
"Generic (PLEG): container finished" podID="e17958e403057a147cdad09d1abe4cda" containerID="a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c" exitCode=0 Mar 13 10:52:06.179111 master-0 kubenswrapper[7271]: I0313 10:52:06.179100 7271 scope.go:117] "RemoveContainer" containerID="bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef" Mar 13 10:52:06.179279 master-0 kubenswrapper[7271]: I0313 10:52:06.179127 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:52:06.202430 master-0 kubenswrapper[7271]: I0313 10:52:06.202389 7271 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="e17958e403057a147cdad09d1abe4cda" podUID="6aa84d96c35221e650d254cec915ee90" Mar 13 10:52:06.206245 master-0 kubenswrapper[7271]: I0313 10:52:06.206045 7271 scope.go:117] "RemoveContainer" containerID="c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d" Mar 13 10:52:06.223608 master-0 kubenswrapper[7271]: I0313 10:52:06.223564 7271 scope.go:117] "RemoveContainer" containerID="138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a" Mar 13 10:52:06.234533 master-0 kubenswrapper[7271]: I0313 10:52:06.234496 7271 scope.go:117] "RemoveContainer" containerID="a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c" Mar 13 10:52:06.245601 master-0 kubenswrapper[7271]: I0313 10:52:06.245518 7271 scope.go:117] "RemoveContainer" containerID="bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef" Mar 13 10:52:06.246062 master-0 kubenswrapper[7271]: E0313 10:52:06.246031 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef\": container with ID starting with 
bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef not found: ID does not exist" containerID="bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef" Mar 13 10:52:06.246203 master-0 kubenswrapper[7271]: I0313 10:52:06.246178 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef"} err="failed to get container status \"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef\": rpc error: code = NotFound desc = could not find container \"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef\": container with ID starting with bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef not found: ID does not exist" Mar 13 10:52:06.246280 master-0 kubenswrapper[7271]: I0313 10:52:06.246268 7271 scope.go:117] "RemoveContainer" containerID="c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d" Mar 13 10:52:06.246724 master-0 kubenswrapper[7271]: E0313 10:52:06.246697 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d\": container with ID starting with c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d not found: ID does not exist" containerID="c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d" Mar 13 10:52:06.246798 master-0 kubenswrapper[7271]: I0313 10:52:06.246728 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d"} err="failed to get container status \"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d\": rpc error: code = NotFound desc = could not find container \"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d\": container with ID starting with 
c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d not found: ID does not exist" Mar 13 10:52:06.246798 master-0 kubenswrapper[7271]: I0313 10:52:06.246748 7271 scope.go:117] "RemoveContainer" containerID="138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a" Mar 13 10:52:06.246986 master-0 kubenswrapper[7271]: E0313 10:52:06.246968 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a\": container with ID starting with 138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a not found: ID does not exist" containerID="138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a" Mar 13 10:52:06.247080 master-0 kubenswrapper[7271]: I0313 10:52:06.247057 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a"} err="failed to get container status \"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a\": rpc error: code = NotFound desc = could not find container \"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a\": container with ID starting with 138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a not found: ID does not exist" Mar 13 10:52:06.247148 master-0 kubenswrapper[7271]: I0313 10:52:06.247137 7271 scope.go:117] "RemoveContainer" containerID="a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c" Mar 13 10:52:06.247484 master-0 kubenswrapper[7271]: E0313 10:52:06.247446 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c\": container with ID starting with a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c not found: ID does not exist" 
containerID="a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c" Mar 13 10:52:06.247548 master-0 kubenswrapper[7271]: I0313 10:52:06.247481 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c"} err="failed to get container status \"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c\": rpc error: code = NotFound desc = could not find container \"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c\": container with ID starting with a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c not found: ID does not exist" Mar 13 10:52:06.247548 master-0 kubenswrapper[7271]: I0313 10:52:06.247503 7271 scope.go:117] "RemoveContainer" containerID="bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef" Mar 13 10:52:06.247952 master-0 kubenswrapper[7271]: I0313 10:52:06.247929 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef"} err="failed to get container status \"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef\": rpc error: code = NotFound desc = could not find container \"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef\": container with ID starting with bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef not found: ID does not exist" Mar 13 10:52:06.248038 master-0 kubenswrapper[7271]: I0313 10:52:06.248026 7271 scope.go:117] "RemoveContainer" containerID="c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d" Mar 13 10:52:06.248314 master-0 kubenswrapper[7271]: I0313 10:52:06.248290 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d"} err="failed to get container status 
\"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d\": rpc error: code = NotFound desc = could not find container \"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d\": container with ID starting with c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d not found: ID does not exist" Mar 13 10:52:06.248384 master-0 kubenswrapper[7271]: I0313 10:52:06.248314 7271 scope.go:117] "RemoveContainer" containerID="138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a" Mar 13 10:52:06.248558 master-0 kubenswrapper[7271]: I0313 10:52:06.248540 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a"} err="failed to get container status \"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a\": rpc error: code = NotFound desc = could not find container \"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a\": container with ID starting with 138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a not found: ID does not exist" Mar 13 10:52:06.248674 master-0 kubenswrapper[7271]: I0313 10:52:06.248662 7271 scope.go:117] "RemoveContainer" containerID="a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c" Mar 13 10:52:06.248972 master-0 kubenswrapper[7271]: I0313 10:52:06.248939 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c"} err="failed to get container status \"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c\": rpc error: code = NotFound desc = could not find container \"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c\": container with ID starting with a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c not found: ID does not exist" Mar 13 10:52:06.248972 master-0 kubenswrapper[7271]: I0313 10:52:06.248965 7271 
scope.go:117] "RemoveContainer" containerID="bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef" Mar 13 10:52:06.249183 master-0 kubenswrapper[7271]: I0313 10:52:06.249162 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef"} err="failed to get container status \"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef\": rpc error: code = NotFound desc = could not find container \"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef\": container with ID starting with bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef not found: ID does not exist" Mar 13 10:52:06.249229 master-0 kubenswrapper[7271]: I0313 10:52:06.249184 7271 scope.go:117] "RemoveContainer" containerID="c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d" Mar 13 10:52:06.249449 master-0 kubenswrapper[7271]: I0313 10:52:06.249432 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d"} err="failed to get container status \"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d\": rpc error: code = NotFound desc = could not find container \"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d\": container with ID starting with c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d not found: ID does not exist" Mar 13 10:52:06.249512 master-0 kubenswrapper[7271]: I0313 10:52:06.249501 7271 scope.go:117] "RemoveContainer" containerID="138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a" Mar 13 10:52:06.249813 master-0 kubenswrapper[7271]: I0313 10:52:06.249786 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a"} err="failed to get container status 
\"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a\": rpc error: code = NotFound desc = could not find container \"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a\": container with ID starting with 138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a not found: ID does not exist" Mar 13 10:52:06.249863 master-0 kubenswrapper[7271]: I0313 10:52:06.249814 7271 scope.go:117] "RemoveContainer" containerID="a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c" Mar 13 10:52:06.250063 master-0 kubenswrapper[7271]: I0313 10:52:06.250046 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c"} err="failed to get container status \"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c\": rpc error: code = NotFound desc = could not find container \"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c\": container with ID starting with a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c not found: ID does not exist" Mar 13 10:52:06.250135 master-0 kubenswrapper[7271]: I0313 10:52:06.250124 7271 scope.go:117] "RemoveContainer" containerID="bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef" Mar 13 10:52:06.250504 master-0 kubenswrapper[7271]: I0313 10:52:06.250475 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef"} err="failed to get container status \"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef\": rpc error: code = NotFound desc = could not find container \"bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef\": container with ID starting with bed0a0a5f360361874b3b36acf32d03fb476767877a6113ef0ce46f36b0c5cef not found: ID does not exist" Mar 13 10:52:06.250504 master-0 kubenswrapper[7271]: I0313 10:52:06.250503 7271 
scope.go:117] "RemoveContainer" containerID="c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d" Mar 13 10:52:06.250757 master-0 kubenswrapper[7271]: I0313 10:52:06.250737 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d"} err="failed to get container status \"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d\": rpc error: code = NotFound desc = could not find container \"c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d\": container with ID starting with c06ee61926bd843fac7ad26a40fc98b850085f88ce6754b96020f3dba709f94d not found: ID does not exist" Mar 13 10:52:06.250841 master-0 kubenswrapper[7271]: I0313 10:52:06.250828 7271 scope.go:117] "RemoveContainer" containerID="138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a" Mar 13 10:52:06.251114 master-0 kubenswrapper[7271]: I0313 10:52:06.251097 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a"} err="failed to get container status \"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a\": rpc error: code = NotFound desc = could not find container \"138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a\": container with ID starting with 138726ee7e5f2f9ee8caaa2c56e567145e89baabce7e51cddf38365e40f7d65a not found: ID does not exist" Mar 13 10:52:06.251190 master-0 kubenswrapper[7271]: I0313 10:52:06.251179 7271 scope.go:117] "RemoveContainer" containerID="a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c" Mar 13 10:52:06.251737 master-0 kubenswrapper[7271]: I0313 10:52:06.251491 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c"} err="failed to get container status 
\"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c\": rpc error: code = NotFound desc = could not find container \"a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c\": container with ID starting with a4ef8e2a0247042ee9ee06257022e86e00f5ca68175f3c846440f8f6cacf339c not found: ID does not exist" Mar 13 10:52:06.883280 master-0 kubenswrapper[7271]: I0313 10:52:06.883216 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:06.883280 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:06.883280 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:06.883280 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:06.883653 master-0 kubenswrapper[7271]: I0313 10:52:06.883279 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:07.459303 master-0 kubenswrapper[7271]: I0313 10:52:07.459177 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 10:52:07.562894 master-0 kubenswrapper[7271]: I0313 10:52:07.562832 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8bdd05f-f920-4441-969f-336c85d2da57-kube-api-access\") pod \"d8bdd05f-f920-4441-969f-336c85d2da57\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " Mar 13 10:52:07.563151 master-0 kubenswrapper[7271]: I0313 10:52:07.563019 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-var-lock\") pod \"d8bdd05f-f920-4441-969f-336c85d2da57\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " Mar 13 10:52:07.563151 master-0 kubenswrapper[7271]: I0313 10:52:07.563116 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-kubelet-dir\") pod \"d8bdd05f-f920-4441-969f-336c85d2da57\" (UID: \"d8bdd05f-f920-4441-969f-336c85d2da57\") " Mar 13 10:52:07.563278 master-0 kubenswrapper[7271]: I0313 10:52:07.563212 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-var-lock" (OuterVolumeSpecName: "var-lock") pod "d8bdd05f-f920-4441-969f-336c85d2da57" (UID: "d8bdd05f-f920-4441-969f-336c85d2da57"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:52:07.563381 master-0 kubenswrapper[7271]: I0313 10:52:07.563324 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d8bdd05f-f920-4441-969f-336c85d2da57" (UID: "d8bdd05f-f920-4441-969f-336c85d2da57"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:52:07.563830 master-0 kubenswrapper[7271]: I0313 10:52:07.563798 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:52:07.563830 master-0 kubenswrapper[7271]: I0313 10:52:07.563820 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d8bdd05f-f920-4441-969f-336c85d2da57-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:52:07.565796 master-0 kubenswrapper[7271]: I0313 10:52:07.565760 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8bdd05f-f920-4441-969f-336c85d2da57-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d8bdd05f-f920-4441-969f-336c85d2da57" (UID: "d8bdd05f-f920-4441-969f-336c85d2da57"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:52:07.646073 master-0 kubenswrapper[7271]: I0313 10:52:07.646013 7271 scope.go:117] "RemoveContainer" containerID="e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a"
Mar 13 10:52:07.646333 master-0 kubenswrapper[7271]: E0313 10:52:07.646312 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98"
Mar 13 10:52:07.664702 master-0 kubenswrapper[7271]: I0313 10:52:07.664637 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8bdd05f-f920-4441-969f-336c85d2da57-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 10:52:07.883320 master-0 kubenswrapper[7271]: I0313 10:52:07.883258 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:07.883320 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:07.883320 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:07.883320 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:07.883659 master-0 kubenswrapper[7271]: I0313 10:52:07.883328 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:08.192036 master-0 kubenswrapper[7271]: I0313 10:52:08.191865 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"d8bdd05f-f920-4441-969f-336c85d2da57","Type":"ContainerDied","Data":"a275b154e2bd75d46956f1b7e89d0825c0f4544634205616a815e2c59d1fd381"}
Mar 13 10:52:08.192036 master-0 kubenswrapper[7271]: I0313 10:52:08.191908 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a275b154e2bd75d46956f1b7e89d0825c0f4544634205616a815e2c59d1fd381"
Mar 13 10:52:08.192036 master-0 kubenswrapper[7271]: I0313 10:52:08.191930 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:52:08.882878 master-0 kubenswrapper[7271]: I0313 10:52:08.882833 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:08.882878 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:08.882878 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:08.882878 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:08.883418 master-0 kubenswrapper[7271]: I0313 10:52:08.882884 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:09.883802 master-0 kubenswrapper[7271]: I0313 10:52:09.883568 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:09.883802 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:09.883802 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:09.883802 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:09.883802 master-0 kubenswrapper[7271]: I0313 10:52:09.883717 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:10.644991 master-0 kubenswrapper[7271]: I0313 10:52:10.644848 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:52:10.668072 master-0 kubenswrapper[7271]: I0313 10:52:10.667999 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="52d00d9e-afb5-4268-a50f-327eaf627751"
Mar 13 10:52:10.668072 master-0 kubenswrapper[7271]: I0313 10:52:10.668052 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="52d00d9e-afb5-4268-a50f-327eaf627751"
Mar 13 10:52:10.681574 master-0 kubenswrapper[7271]: I0313 10:52:10.681513 7271 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:52:10.687224 master-0 kubenswrapper[7271]: I0313 10:52:10.687127 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 10:52:10.691732 master-0 kubenswrapper[7271]: I0313 10:52:10.691642 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 10:52:10.695715 master-0 kubenswrapper[7271]: I0313 10:52:10.695649 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:52:10.699426 master-0 kubenswrapper[7271]: I0313 10:52:10.699353 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 10:52:10.724716 master-0 kubenswrapper[7271]: W0313 10:52:10.724634 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa6a75ab47c06be4e74d05f552da4470.slice/crio-a63b48f34aee10c2a5c7b02c8aaeb3e69aba820b8bbd971ae28a10945e9803c8 WatchSource:0}: Error finding container a63b48f34aee10c2a5c7b02c8aaeb3e69aba820b8bbd971ae28a10945e9803c8: Status 404 returned error can't find the container with id a63b48f34aee10c2a5c7b02c8aaeb3e69aba820b8bbd971ae28a10945e9803c8
Mar 13 10:52:10.882897 master-0 kubenswrapper[7271]: I0313 10:52:10.882821 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:10.882897 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:10.882897 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:10.882897 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:10.883220 master-0 kubenswrapper[7271]: I0313 10:52:10.882934 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:11.214047 master-0 kubenswrapper[7271]: I0313 10:52:11.213913 7271 generic.go:334] "Generic (PLEG): container finished" podID="aa6a75ab47c06be4e74d05f552da4470" containerID="227c8746e47f893b6d381d14bd366358a094ecb7ef45b704033632f673e46c1d" exitCode=0
Mar 13 10:52:11.214047 master-0 kubenswrapper[7271]: I0313 10:52:11.213974 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerDied","Data":"227c8746e47f893b6d381d14bd366358a094ecb7ef45b704033632f673e46c1d"}
Mar 13 10:52:11.214047 master-0 kubenswrapper[7271]: I0313 10:52:11.214032 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"a63b48f34aee10c2a5c7b02c8aaeb3e69aba820b8bbd971ae28a10945e9803c8"}
Mar 13 10:52:11.882692 master-0 kubenswrapper[7271]: I0313 10:52:11.882619 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:11.882692 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:11.882692 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:11.882692 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:11.882976 master-0 kubenswrapper[7271]: I0313 10:52:11.882709 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:12.221935 master-0 kubenswrapper[7271]: I0313 10:52:12.221787 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"19dc3a66f25c011c8069f1ee0dadbbce99939d7e2ec153af7962229cb1af28b2"}
Mar 13 10:52:12.221935 master-0 kubenswrapper[7271]: I0313 10:52:12.221831 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"d7b0acead87987f502b3f41605ffb9cdd08548721125b8c7786f9988d47d3b01"}
Mar 13 10:52:12.221935 master-0 kubenswrapper[7271]: I0313 10:52:12.221841 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"1e5bb19d8f372bc256a34ecf958e795ff2a4e0422d6d1eb0385564047658d8ca"}
Mar 13 10:52:12.221935 master-0 kubenswrapper[7271]: I0313 10:52:12.221940 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:52:12.242122 master-0 kubenswrapper[7271]: I0313 10:52:12.242043 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.242018187 podStartE2EDuration="2.242018187s" podCreationTimestamp="2026-03-13 10:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:52:12.241060331 +0000 UTC m=+986.767882721" watchObservedRunningTime="2026-03-13 10:52:12.242018187 +0000 UTC m=+986.768840577"
Mar 13 10:52:12.882749 master-0 kubenswrapper[7271]: I0313 10:52:12.882680 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:12.882749 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:12.882749 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:12.882749 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:12.883137 master-0 kubenswrapper[7271]: I0313 10:52:12.882760 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:13.882461 master-0 kubenswrapper[7271]: I0313 10:52:13.882419 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:13.882461 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:13.882461 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:13.882461 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:13.883270 master-0 kubenswrapper[7271]: I0313 10:52:13.883240 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:14.881945 master-0 kubenswrapper[7271]: I0313 10:52:14.881896 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:14.881945 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:14.881945 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:14.881945 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:14.882282 master-0 kubenswrapper[7271]: I0313 10:52:14.881961 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:15.882545 master-0 kubenswrapper[7271]: I0313 10:52:15.882458 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:15.882545 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:15.882545 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:15.882545 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:15.883152 master-0 kubenswrapper[7271]: I0313 10:52:15.882566 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:16.883580 master-0 kubenswrapper[7271]: I0313 10:52:16.883473 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:16.883580 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:16.883580 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:16.883580 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:16.883580 master-0 kubenswrapper[7271]: I0313 10:52:16.883583 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:17.883422 master-0 kubenswrapper[7271]: I0313 10:52:17.883341 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:17.883422 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:17.883422 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:17.883422 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:17.884624 master-0 kubenswrapper[7271]: I0313 10:52:17.883434 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:18.884120 master-0 kubenswrapper[7271]: I0313 10:52:18.884035 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:18.884120 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:18.884120 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:18.884120 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:18.884895 master-0 kubenswrapper[7271]: I0313 10:52:18.884145 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:19.882813 master-0 kubenswrapper[7271]: I0313 10:52:19.882750 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:19.882813 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:19.882813 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:19.882813 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:19.882813 master-0 kubenswrapper[7271]: I0313 10:52:19.882815 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:20.645476 master-0 kubenswrapper[7271]: I0313 10:52:20.645412 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:20.665739 master-0 kubenswrapper[7271]: I0313 10:52:20.665694 7271 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="16c343b4-5e84-40e2-8306-49d78ab85eef"
Mar 13 10:52:20.665739 master-0 kubenswrapper[7271]: I0313 10:52:20.665732 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="16c343b4-5e84-40e2-8306-49d78ab85eef"
Mar 13 10:52:20.682390 master-0 kubenswrapper[7271]: I0313 10:52:20.677722 7271 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:20.686151 master-0 kubenswrapper[7271]: I0313 10:52:20.686083 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 10:52:20.690002 master-0 kubenswrapper[7271]: I0313 10:52:20.689967 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:20.696785 master-0 kubenswrapper[7271]: I0313 10:52:20.696726 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 10:52:20.703137 master-0 kubenswrapper[7271]: I0313 10:52:20.703051 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 10:52:20.717414 master-0 kubenswrapper[7271]: W0313 10:52:20.717357 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aa84d96c35221e650d254cec915ee90.slice/crio-c7ee7c0157ea1a3aa7abb463c26a21b9f9c80a7c51726be3dad9c08112783426 WatchSource:0}: Error finding container c7ee7c0157ea1a3aa7abb463c26a21b9f9c80a7c51726be3dad9c08112783426: Status 404 returned error can't find the container with id c7ee7c0157ea1a3aa7abb463c26a21b9f9c80a7c51726be3dad9c08112783426
Mar 13 10:52:20.884036 master-0 kubenswrapper[7271]: I0313 10:52:20.883968 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:20.884036 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:20.884036 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:20.884036 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:20.884242 master-0 kubenswrapper[7271]: I0313 10:52:20.884067 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:21.294549 master-0 kubenswrapper[7271]: I0313 10:52:21.294476 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"9055e315c8a514a2e7caff4002ccd935f6b8f26c1543cb6f8b2224217493efae"}
Mar 13 10:52:21.294549 master-0 kubenswrapper[7271]: I0313 10:52:21.294533 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"b13c60bcec66207b3ea2a744a2bea2122f3896924902c67d892d48a026ec7cde"}
Mar 13 10:52:21.294549 master-0 kubenswrapper[7271]: I0313 10:52:21.294547 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"c7ee7c0157ea1a3aa7abb463c26a21b9f9c80a7c51726be3dad9c08112783426"}
Mar 13 10:52:21.651736 master-0 kubenswrapper[7271]: I0313 10:52:21.651654 7271 scope.go:117] "RemoveContainer" containerID="e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a"
Mar 13 10:52:21.653998 master-0 kubenswrapper[7271]: E0313 10:52:21.651922 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98"
Mar 13 10:52:21.883625 master-0 kubenswrapper[7271]: I0313 10:52:21.883531 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:21.883625 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:21.883625 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:21.883625 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:21.883962 master-0 kubenswrapper[7271]: I0313 10:52:21.883664 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:22.305041 master-0 kubenswrapper[7271]: I0313 10:52:22.304906 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"8ef4ca3fd55a1fdc272bbe95b06fd59615f0875eb40d0760256756564104e8c0"}
Mar 13 10:52:22.305041 master-0 kubenswrapper[7271]: I0313 10:52:22.304972 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"c628e765eaabffc23db2c1635eeb15519da1c1cbfb8a52269fa9da1481c956a3"}
Mar 13 10:52:22.330116 master-0 kubenswrapper[7271]: I0313 10:52:22.329995 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.329963575 podStartE2EDuration="2.329963575s" podCreationTimestamp="2026-03-13 10:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:52:22.324070752 +0000 UTC m=+996.850893152" watchObservedRunningTime="2026-03-13 10:52:22.329963575 +0000 UTC m=+996.856785975"
Mar 13 10:52:22.883477 master-0 kubenswrapper[7271]: I0313 10:52:22.883335 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:22.883477 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:22.883477 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:22.883477 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:22.883477 master-0 kubenswrapper[7271]: I0313 10:52:22.883471 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:23.883406 master-0 kubenswrapper[7271]: I0313 10:52:23.883332 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:23.883406 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:23.883406 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:23.883406 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:23.884403 master-0 kubenswrapper[7271]: I0313 10:52:23.883431 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:24.882772 master-0 kubenswrapper[7271]: I0313 10:52:24.882676 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:24.882772 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:24.882772 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:24.882772 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:24.883375 master-0 kubenswrapper[7271]: I0313 10:52:24.882803 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:25.882753 master-0 kubenswrapper[7271]: I0313 10:52:25.882649 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:25.882753 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:25.882753 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:25.882753 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:25.883973 master-0 kubenswrapper[7271]: I0313 10:52:25.882762 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:26.883070 master-0 kubenswrapper[7271]: I0313 10:52:26.882987 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:26.883070 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:26.883070 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:26.883070 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:26.883718 master-0 kubenswrapper[7271]: I0313 10:52:26.883109 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:27.883914 master-0 kubenswrapper[7271]: I0313 10:52:27.883848 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:27.883914 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:27.883914 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:27.883914 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:27.884860 master-0 kubenswrapper[7271]: I0313 10:52:27.883940 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:28.883508 master-0 kubenswrapper[7271]: I0313 10:52:28.883407 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:28.883508 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:28.883508 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:28.883508 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:28.884867 master-0 kubenswrapper[7271]: I0313 10:52:28.883524 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:29.888125 master-0 kubenswrapper[7271]: I0313 10:52:29.888057 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:29.888125 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:29.888125 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:29.888125 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:29.888125 master-0 kubenswrapper[7271]: I0313 10:52:29.888123 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:30.691291 master-0 kubenswrapper[7271]: I0313 10:52:30.691202 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:30.691897 master-0 kubenswrapper[7271]: I0313 10:52:30.691320 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:30.691897 master-0 kubenswrapper[7271]: I0313 10:52:30.691339 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:30.691897 master-0 kubenswrapper[7271]: I0313 10:52:30.691376 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:30.697221 master-0 kubenswrapper[7271]: I0313 10:52:30.697168 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:30.697874 master-0 kubenswrapper[7271]: I0313 10:52:30.697844 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:30.883821 master-0 kubenswrapper[7271]: I0313 10:52:30.883719 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:30.883821 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:30.883821 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:30.883821 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:30.884258 master-0 kubenswrapper[7271]: I0313 10:52:30.883857 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:31.432514 master-0 kubenswrapper[7271]: I0313 10:52:31.432436 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:31.433805 master-0 kubenswrapper[7271]: I0313 10:52:31.432838 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:52:31.883304 master-0 kubenswrapper[7271]: I0313 10:52:31.883165 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:31.883304 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:31.883304 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:31.883304 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:31.884009 master-0 kubenswrapper[7271]: I0313 10:52:31.883317 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:32.883510 master-0 kubenswrapper[7271]: I0313 10:52:32.883402 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:32.883510 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:32.883510 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:32.883510 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:32.884204 master-0 kubenswrapper[7271]: I0313 10:52:32.883580 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:33.883256 master-0 kubenswrapper[7271]: I0313 10:52:33.883155 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:33.883256 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:33.883256 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:33.883256 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:33.884281 master-0 kubenswrapper[7271]: I0313 10:52:33.883295 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:34.884070 master-0 kubenswrapper[7271]: I0313 10:52:34.883997 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:34.884070 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:34.884070 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:34.884070 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:34.884070 master-0 kubenswrapper[7271]: I0313 10:52:34.884065 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:35.883757 master-0 kubenswrapper[7271]: I0313 10:52:35.883664 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:35.883757 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:35.883757 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:35.883757 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:35.884130 master-0 kubenswrapper[7271]: I0313 10:52:35.883797 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:36.645522 master-0 kubenswrapper[7271]: I0313 10:52:36.645466 7271 scope.go:117] "RemoveContainer" containerID="e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a"
Mar 13 10:52:36.645861 master-0 kubenswrapper[7271]: E0313 10:52:36.645689 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98"
Mar 13 10:52:36.883995 master-0 kubenswrapper[7271]: I0313 10:52:36.883870 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:52:36.883995 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:52:36.883995 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:52:36.883995 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:52:36.883995 master-0 kubenswrapper[7271]: I0313 10:52:36.883982 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:52:37.883470 master-0 kubenswrapper[7271]: I0313 10:52:37.883351 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:37.883470 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:37.883470 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:37.883470 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:37.883470 master-0 kubenswrapper[7271]: I0313 10:52:37.883429 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:38.883189 master-0 kubenswrapper[7271]: I0313 10:52:38.883094 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:38.883189 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:38.883189 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:38.883189 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:38.883189 master-0 kubenswrapper[7271]: I0313 10:52:38.883160 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:39.882812 master-0 kubenswrapper[7271]: I0313 10:52:39.882745 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:39.882812 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:39.882812 
master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:39.882812 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:39.883204 master-0 kubenswrapper[7271]: I0313 10:52:39.882834 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:40.882247 master-0 kubenswrapper[7271]: I0313 10:52:40.882150 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:40.882247 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:40.882247 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:40.882247 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:40.882983 master-0 kubenswrapper[7271]: I0313 10:52:40.882266 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:41.883737 master-0 kubenswrapper[7271]: I0313 10:52:41.883661 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:41.883737 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:41.883737 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:41.883737 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:41.884484 master-0 kubenswrapper[7271]: I0313 10:52:41.883741 7271 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:42.883495 master-0 kubenswrapper[7271]: I0313 10:52:42.883425 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:42.883495 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:42.883495 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:42.883495 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:42.884139 master-0 kubenswrapper[7271]: I0313 10:52:42.883520 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:43.882359 master-0 kubenswrapper[7271]: I0313 10:52:43.882307 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:43.882359 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:43.882359 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:43.882359 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:43.882675 master-0 kubenswrapper[7271]: I0313 10:52:43.882374 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 13 10:52:44.883304 master-0 kubenswrapper[7271]: I0313 10:52:44.883246 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:44.883304 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:44.883304 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:44.883304 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:44.883964 master-0 kubenswrapper[7271]: I0313 10:52:44.883318 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:45.883803 master-0 kubenswrapper[7271]: I0313 10:52:45.883726 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:45.883803 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:45.883803 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:45.883803 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:45.883803 master-0 kubenswrapper[7271]: I0313 10:52:45.883800 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:46.882179 master-0 kubenswrapper[7271]: I0313 10:52:46.882124 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:46.882179 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:46.882179 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:46.882179 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:46.882179 master-0 kubenswrapper[7271]: I0313 10:52:46.882181 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:47.883003 master-0 kubenswrapper[7271]: I0313 10:52:47.882950 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:47.883003 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:47.883003 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:47.883003 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:47.883835 master-0 kubenswrapper[7271]: I0313 10:52:47.883012 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:48.882410 master-0 kubenswrapper[7271]: I0313 10:52:48.882318 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:48.882410 master-0 kubenswrapper[7271]: 
[-]has-synced failed: reason withheld Mar 13 10:52:48.882410 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:48.882410 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:48.882731 master-0 kubenswrapper[7271]: I0313 10:52:48.882436 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:49.884023 master-0 kubenswrapper[7271]: I0313 10:52:49.883942 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:49.884023 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:49.884023 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:49.884023 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:49.884023 master-0 kubenswrapper[7271]: I0313 10:52:49.884010 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:50.883062 master-0 kubenswrapper[7271]: I0313 10:52:50.882991 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:50.883062 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:50.883062 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:50.883062 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:50.883418 master-0 
kubenswrapper[7271]: I0313 10:52:50.883064 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:51.645496 master-0 kubenswrapper[7271]: I0313 10:52:51.645418 7271 scope.go:117] "RemoveContainer" containerID="e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a" Mar 13 10:52:51.646208 master-0 kubenswrapper[7271]: E0313 10:52:51.645664 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:52:51.882261 master-0 kubenswrapper[7271]: I0313 10:52:51.882140 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:51.882261 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:51.882261 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:51.882261 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:51.882261 master-0 kubenswrapper[7271]: I0313 10:52:51.882203 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:52.883113 master-0 kubenswrapper[7271]: I0313 10:52:52.883054 7271 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:52.883113 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:52.883113 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:52.883113 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:52.883745 master-0 kubenswrapper[7271]: I0313 10:52:52.883117 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:53.883520 master-0 kubenswrapper[7271]: I0313 10:52:53.883147 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:53.883520 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:53.883520 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:53.883520 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:53.883520 master-0 kubenswrapper[7271]: I0313 10:52:53.883218 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:54.884143 master-0 kubenswrapper[7271]: I0313 10:52:54.884057 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
10:52:54.884143 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:54.884143 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:54.884143 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:54.885013 master-0 kubenswrapper[7271]: I0313 10:52:54.884165 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:55.882670 master-0 kubenswrapper[7271]: I0313 10:52:55.882615 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:55.882670 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:55.882670 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:55.882670 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:55.882670 master-0 kubenswrapper[7271]: I0313 10:52:55.882672 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:56.883886 master-0 kubenswrapper[7271]: I0313 10:52:56.883833 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:56.883886 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:56.883886 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:56.883886 master-0 kubenswrapper[7271]: healthz 
check failed Mar 13 10:52:56.884778 master-0 kubenswrapper[7271]: I0313 10:52:56.884745 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:57.882772 master-0 kubenswrapper[7271]: I0313 10:52:57.882721 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:57.882772 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:57.882772 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:57.882772 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:57.883060 master-0 kubenswrapper[7271]: I0313 10:52:57.882781 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:58.883382 master-0 kubenswrapper[7271]: I0313 10:52:58.883312 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:58.883382 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:58.883382 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:58.883382 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:58.884162 master-0 kubenswrapper[7271]: I0313 10:52:58.883391 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:52:59.882318 master-0 kubenswrapper[7271]: I0313 10:52:59.882230 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:52:59.882318 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:52:59.882318 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:52:59.882318 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:52:59.882318 master-0 kubenswrapper[7271]: I0313 10:52:59.882290 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:00.700902 master-0 kubenswrapper[7271]: I0313 10:53:00.700836 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:53:00.882252 master-0 kubenswrapper[7271]: I0313 10:53:00.882180 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:00.882252 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:00.882252 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:00.882252 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:00.882661 master-0 kubenswrapper[7271]: I0313 10:53:00.882259 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:01.883320 master-0 kubenswrapper[7271]: I0313 10:53:01.883254 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:01.883320 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:01.883320 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:01.883320 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:01.883947 master-0 kubenswrapper[7271]: I0313 10:53:01.883333 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:02.882867 master-0 kubenswrapper[7271]: I0313 10:53:02.882790 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:02.882867 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:02.882867 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:02.882867 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:02.882867 master-0 kubenswrapper[7271]: I0313 10:53:02.882853 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:03.884060 master-0 kubenswrapper[7271]: I0313 10:53:03.883995 7271 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:03.884060 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:03.884060 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:03.884060 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:03.884719 master-0 kubenswrapper[7271]: I0313 10:53:03.884064 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:04.883072 master-0 kubenswrapper[7271]: I0313 10:53:04.882995 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:04.883072 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:04.883072 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:04.883072 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:04.883384 master-0 kubenswrapper[7271]: I0313 10:53:04.883097 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:05.884631 master-0 kubenswrapper[7271]: I0313 10:53:05.884097 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:05.884631 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:05.884631 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:05.884631 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:05.884631 master-0 kubenswrapper[7271]: I0313 10:53:05.884210 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:06.645929 master-0 kubenswrapper[7271]: I0313 10:53:06.645877 7271 scope.go:117] "RemoveContainer" containerID="e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a" Mar 13 10:53:06.646545 master-0 kubenswrapper[7271]: E0313 10:53:06.646521 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:53:06.884227 master-0 kubenswrapper[7271]: I0313 10:53:06.884148 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:06.884227 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:06.884227 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:06.884227 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:06.885389 master-0 kubenswrapper[7271]: I0313 10:53:06.884263 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:07.883947 master-0 kubenswrapper[7271]: I0313 10:53:07.883829 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:07.883947 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:07.883947 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:07.883947 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:07.884663 master-0 kubenswrapper[7271]: I0313 10:53:07.883979 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:08.883905 master-0 kubenswrapper[7271]: I0313 10:53:08.883827 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:08.883905 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:08.883905 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:08.883905 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:08.884794 master-0 kubenswrapper[7271]: I0313 10:53:08.883943 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:09.883759 
master-0 kubenswrapper[7271]: I0313 10:53:09.883504 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:09.883759 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:09.883759 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:09.883759 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:09.884917 master-0 kubenswrapper[7271]: I0313 10:53:09.883776 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:09.884917 master-0 kubenswrapper[7271]: I0313 10:53:09.883908 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:53:09.885423 master-0 kubenswrapper[7271]: I0313 10:53:09.885350 7271 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"0b24465f9fb9d577096620c9e6db8d758ee4126ca9a38d85dbec64c888ecae41"} pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" containerMessage="Container router failed startup probe, will be restarted" Mar 13 10:53:09.885508 master-0 kubenswrapper[7271]: I0313 10:53:09.885445 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" containerID="cri-o://0b24465f9fb9d577096620c9e6db8d758ee4126ca9a38d85dbec64c888ecae41" gracePeriod=3600 Mar 13 10:53:21.645504 master-0 kubenswrapper[7271]: I0313 10:53:21.645419 7271 scope.go:117] "RemoveContainer" 
containerID="e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a" Mar 13 10:53:21.794303 master-0 kubenswrapper[7271]: I0313 10:53:21.794238 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/4.log" Mar 13 10:53:21.794810 master-0 kubenswrapper[7271]: I0313 10:53:21.794619 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerStarted","Data":"8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0"} Mar 13 10:53:56.531928 master-0 kubenswrapper[7271]: I0313 10:53:56.531740 7271 generic.go:334] "Generic (PLEG): container finished" podID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerID="0b24465f9fb9d577096620c9e6db8d758ee4126ca9a38d85dbec64c888ecae41" exitCode=0 Mar 13 10:53:56.531928 master-0 kubenswrapper[7271]: I0313 10:53:56.531817 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerDied","Data":"0b24465f9fb9d577096620c9e6db8d758ee4126ca9a38d85dbec64c888ecae41"} Mar 13 10:53:56.531928 master-0 kubenswrapper[7271]: I0313 10:53:56.531870 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerStarted","Data":"05d3fbc3bee5182c5f9073fc6b00e89e15ad18832a082318ea3763b5cb1e923e"} Mar 13 10:53:56.531928 master-0 kubenswrapper[7271]: I0313 10:53:56.531894 7271 scope.go:117] "RemoveContainer" containerID="b7092e3092801ee7cac052ee6ef29cb5b3962e6dfa9253411baa18d8c09d2942" Mar 13 10:53:56.880762 master-0 kubenswrapper[7271]: I0313 10:53:56.880675 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:53:56.884141 master-0 kubenswrapper[7271]: I0313 10:53:56.884097 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:56.884141 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:56.884141 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:56.884141 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:56.884397 master-0 kubenswrapper[7271]: I0313 10:53:56.884169 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:57.883880 master-0 kubenswrapper[7271]: I0313 10:53:57.883793 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:57.883880 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:57.883880 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:57.883880 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:57.885147 master-0 kubenswrapper[7271]: I0313 10:53:57.883916 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:58.884883 master-0 kubenswrapper[7271]: I0313 10:53:58.884786 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:58.884883 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:58.884883 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:58.884883 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:58.885992 master-0 kubenswrapper[7271]: I0313 10:53:58.884899 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:53:59.888621 master-0 kubenswrapper[7271]: I0313 10:53:59.885261 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:53:59.888621 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:53:59.888621 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:53:59.888621 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:53:59.888621 master-0 kubenswrapper[7271]: I0313 10:53:59.885328 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:00.884063 master-0 kubenswrapper[7271]: I0313 10:54:00.884004 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:00.884063 master-0 
kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:00.884063 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:00.884063 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:00.884543 master-0 kubenswrapper[7271]: I0313 10:54:00.884503 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:01.883085 master-0 kubenswrapper[7271]: I0313 10:54:01.883020 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:01.883085 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:01.883085 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:01.883085 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:01.883872 master-0 kubenswrapper[7271]: I0313 10:54:01.883098 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:02.880903 master-0 kubenswrapper[7271]: I0313 10:54:02.880842 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:54:02.883412 master-0 kubenswrapper[7271]: I0313 10:54:02.883378 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:02.883412 master-0 
kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:02.883412 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:02.883412 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:02.883891 master-0 kubenswrapper[7271]: I0313 10:54:02.883438 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:03.883088 master-0 kubenswrapper[7271]: I0313 10:54:03.883021 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:03.883088 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:03.883088 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:03.883088 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:03.884180 master-0 kubenswrapper[7271]: I0313 10:54:03.884053 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:04.882759 master-0 kubenswrapper[7271]: I0313 10:54:04.882695 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:04.882759 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:04.882759 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:04.882759 master-0 kubenswrapper[7271]: healthz check failed Mar 13 
10:54:04.883188 master-0 kubenswrapper[7271]: I0313 10:54:04.882767 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:05.882988 master-0 kubenswrapper[7271]: I0313 10:54:05.882927 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:05.882988 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:05.882988 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:05.882988 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:05.882988 master-0 kubenswrapper[7271]: I0313 10:54:05.882987 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:06.883023 master-0 kubenswrapper[7271]: I0313 10:54:06.882941 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:06.883023 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:06.883023 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:06.883023 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:06.883731 master-0 kubenswrapper[7271]: I0313 10:54:06.883069 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:07.883680 master-0 kubenswrapper[7271]: I0313 10:54:07.883612 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:07.883680 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:07.883680 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:07.883680 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:07.884708 master-0 kubenswrapper[7271]: I0313 10:54:07.884657 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:08.884184 master-0 kubenswrapper[7271]: I0313 10:54:08.884079 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:08.884184 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:08.884184 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:08.884184 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:08.884184 master-0 kubenswrapper[7271]: I0313 10:54:08.884171 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:09.882851 master-0 kubenswrapper[7271]: I0313 10:54:09.882715 7271 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:09.882851 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:09.882851 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:09.882851 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:09.883426 master-0 kubenswrapper[7271]: I0313 10:54:09.882864 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:10.883218 master-0 kubenswrapper[7271]: I0313 10:54:10.883155 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:10.883218 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:10.883218 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:10.883218 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:10.883917 master-0 kubenswrapper[7271]: I0313 10:54:10.883221 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:11.884228 master-0 kubenswrapper[7271]: I0313 10:54:11.884127 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:11.884228 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:11.884228 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:11.884228 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:11.884228 master-0 kubenswrapper[7271]: I0313 10:54:11.884219 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:12.883695 master-0 kubenswrapper[7271]: I0313 10:54:12.883565 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:12.883695 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:12.883695 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:12.883695 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:12.883695 master-0 kubenswrapper[7271]: I0313 10:54:12.883662 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:13.884865 master-0 kubenswrapper[7271]: I0313 10:54:13.884743 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:13.884865 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:13.884865 master-0 kubenswrapper[7271]: [+]process-running ok 
Mar 13 10:54:13.884865 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:13.885753 master-0 kubenswrapper[7271]: I0313 10:54:13.884882 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:14.882397 master-0 kubenswrapper[7271]: I0313 10:54:14.882319 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:14.882397 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:14.882397 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:14.882397 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:14.882397 master-0 kubenswrapper[7271]: I0313 10:54:14.882393 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:15.883766 master-0 kubenswrapper[7271]: I0313 10:54:15.883667 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:15.883766 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:15.883766 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:15.883766 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:15.885061 master-0 kubenswrapper[7271]: I0313 10:54:15.883773 7271 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:16.883130 master-0 kubenswrapper[7271]: I0313 10:54:16.883045 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:16.883130 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:16.883130 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:16.883130 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:16.883130 master-0 kubenswrapper[7271]: I0313 10:54:16.883125 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:17.883288 master-0 kubenswrapper[7271]: I0313 10:54:17.883202 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:17.883288 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:17.883288 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:17.883288 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:17.883984 master-0 kubenswrapper[7271]: I0313 10:54:17.883318 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:18.883322 
master-0 kubenswrapper[7271]: I0313 10:54:18.883260 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:18.883322 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:18.883322 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:18.883322 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:18.884149 master-0 kubenswrapper[7271]: I0313 10:54:18.883329 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:19.883442 master-0 kubenswrapper[7271]: I0313 10:54:19.883342 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:19.883442 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:19.883442 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:19.883442 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:19.884249 master-0 kubenswrapper[7271]: I0313 10:54:19.883467 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:20.883513 master-0 kubenswrapper[7271]: I0313 10:54:20.883406 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:20.883513 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:20.883513 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:20.883513 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:20.884353 master-0 kubenswrapper[7271]: I0313 10:54:20.883531 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:21.883702 master-0 kubenswrapper[7271]: I0313 10:54:21.883625 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:21.883702 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:21.883702 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:21.883702 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:21.884476 master-0 kubenswrapper[7271]: I0313 10:54:21.883722 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:22.884073 master-0 kubenswrapper[7271]: I0313 10:54:22.884007 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:22.884073 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:22.884073 master-0 
kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:22.884073 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:22.884844 master-0 kubenswrapper[7271]: I0313 10:54:22.884816 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:23.883922 master-0 kubenswrapper[7271]: I0313 10:54:23.883837 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:23.883922 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:23.883922 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:23.883922 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:23.884755 master-0 kubenswrapper[7271]: I0313 10:54:23.883937 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:24.883993 master-0 kubenswrapper[7271]: I0313 10:54:24.883923 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:24.883993 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:24.883993 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:24.883993 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:24.884708 master-0 kubenswrapper[7271]: I0313 10:54:24.884031 7271 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:25.883828 master-0 kubenswrapper[7271]: I0313 10:54:25.883655 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:25.883828 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:25.883828 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:25.883828 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:25.883828 master-0 kubenswrapper[7271]: I0313 10:54:25.883752 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:26.882634 master-0 kubenswrapper[7271]: I0313 10:54:26.882541 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:26.882634 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:26.882634 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:26.882634 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:26.882935 master-0 kubenswrapper[7271]: I0313 10:54:26.882658 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 13 10:54:27.882333 master-0 kubenswrapper[7271]: I0313 10:54:27.882286 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:27.882333 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:27.882333 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:27.882333 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:27.882960 master-0 kubenswrapper[7271]: I0313 10:54:27.882352 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:28.883173 master-0 kubenswrapper[7271]: I0313 10:54:28.883124 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:28.883173 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:28.883173 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:28.883173 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:28.883821 master-0 kubenswrapper[7271]: I0313 10:54:28.883187 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:29.882929 master-0 kubenswrapper[7271]: I0313 10:54:29.882838 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:29.882929 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:29.882929 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:29.882929 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:29.883655 master-0 kubenswrapper[7271]: I0313 10:54:29.882956 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:30.882759 master-0 kubenswrapper[7271]: I0313 10:54:30.882673 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:30.882759 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:30.882759 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:30.882759 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:30.882759 master-0 kubenswrapper[7271]: I0313 10:54:30.882754 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:31.882731 master-0 kubenswrapper[7271]: I0313 10:54:31.882660 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:31.882731 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 
10:54:31.882731 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:31.882731 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:31.882731 master-0 kubenswrapper[7271]: I0313 10:54:31.882726 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:32.883947 master-0 kubenswrapper[7271]: I0313 10:54:32.883854 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:32.883947 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:32.883947 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:32.883947 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:32.883947 master-0 kubenswrapper[7271]: I0313 10:54:32.883941 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:33.881840 master-0 kubenswrapper[7271]: I0313 10:54:33.881788 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:33.881840 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:33.881840 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:33.881840 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:33.881840 master-0 kubenswrapper[7271]: I0313 10:54:33.881846 
7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:34.883125 master-0 kubenswrapper[7271]: I0313 10:54:34.883050 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:34.883125 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:34.883125 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:34.883125 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:34.884021 master-0 kubenswrapper[7271]: I0313 10:54:34.883151 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:35.882535 master-0 kubenswrapper[7271]: I0313 10:54:35.882470 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:35.882535 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:35.882535 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:35.882535 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:35.882857 master-0 kubenswrapper[7271]: I0313 10:54:35.882552 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 13 10:54:36.882917 master-0 kubenswrapper[7271]: I0313 10:54:36.882860 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:36.882917 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:36.882917 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:36.882917 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:36.883691 master-0 kubenswrapper[7271]: I0313 10:54:36.882929 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:37.882984 master-0 kubenswrapper[7271]: I0313 10:54:37.882913 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:37.882984 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:37.882984 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:37.882984 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:37.883807 master-0 kubenswrapper[7271]: I0313 10:54:37.882991 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:38.883571 master-0 kubenswrapper[7271]: I0313 10:54:38.883528 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:38.883571 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:38.883571 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:38.883571 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:38.884301 master-0 kubenswrapper[7271]: I0313 10:54:38.884268 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:39.883246 master-0 kubenswrapper[7271]: I0313 10:54:39.883165 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:39.883246 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:39.883246 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:39.883246 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:39.883606 master-0 kubenswrapper[7271]: I0313 10:54:39.883255 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:40.883179 master-0 kubenswrapper[7271]: I0313 10:54:40.883107 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:40.883179 master-0 kubenswrapper[7271]: 
[-]has-synced failed: reason withheld Mar 13 10:54:40.883179 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:40.883179 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:40.883533 master-0 kubenswrapper[7271]: I0313 10:54:40.883193 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:41.884115 master-0 kubenswrapper[7271]: I0313 10:54:41.883962 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:41.884115 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:41.884115 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:41.884115 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:41.884115 master-0 kubenswrapper[7271]: I0313 10:54:41.884024 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:42.882660 master-0 kubenswrapper[7271]: I0313 10:54:42.882575 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:42.882660 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:42.882660 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:42.882660 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:42.882660 master-0 
kubenswrapper[7271]: I0313 10:54:42.882664 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:43.882363 master-0 kubenswrapper[7271]: I0313 10:54:43.882269 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:43.882363 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:43.882363 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:43.882363 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:43.882363 master-0 kubenswrapper[7271]: I0313 10:54:43.882340 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:44.882827 master-0 kubenswrapper[7271]: I0313 10:54:44.882745 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:44.882827 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:44.882827 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:44.882827 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:44.882827 master-0 kubenswrapper[7271]: I0313 10:54:44.882816 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:45.882324 master-0 kubenswrapper[7271]: I0313 10:54:45.882272 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:45.882324 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:45.882324 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:45.882324 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:45.882667 master-0 kubenswrapper[7271]: I0313 10:54:45.882345 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:46.371121 master-0 kubenswrapper[7271]: I0313 10:54:46.371071 7271 scope.go:117] "RemoveContainer" containerID="6b13af5c026fed474c974bcb313df6134917562b08172b1ea3de528fea0a63e8" Mar 13 10:54:46.884524 master-0 kubenswrapper[7271]: I0313 10:54:46.884420 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:46.884524 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:46.884524 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:46.884524 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:46.885228 master-0 kubenswrapper[7271]: I0313 10:54:46.884541 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:47.882667 master-0 kubenswrapper[7271]: I0313 10:54:47.882574 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:47.882667 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:47.882667 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:47.882667 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:47.883383 master-0 kubenswrapper[7271]: I0313 10:54:47.882697 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:48.881954 master-0 kubenswrapper[7271]: I0313 10:54:48.881888 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:48.881954 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:48.881954 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:48.881954 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:48.881954 master-0 kubenswrapper[7271]: I0313 10:54:48.881944 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:49.705281 master-0 kubenswrapper[7271]: I0313 10:54:49.705230 7271 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-c4xgk"] Mar 13 10:54:49.706236 master-0 kubenswrapper[7271]: E0313 10:54:49.706220 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bdd05f-f920-4441-969f-336c85d2da57" containerName="installer" Mar 13 10:54:49.706306 master-0 kubenswrapper[7271]: I0313 10:54:49.706295 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bdd05f-f920-4441-969f-336c85d2da57" containerName="installer" Mar 13 10:54:49.706480 master-0 kubenswrapper[7271]: I0313 10:54:49.706467 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8bdd05f-f920-4441-969f-336c85d2da57" containerName="installer" Mar 13 10:54:49.706979 master-0 kubenswrapper[7271]: I0313 10:54:49.706964 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.709551 master-0 kubenswrapper[7271]: I0313 10:54:49.709113 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 13 10:54:49.709734 master-0 kubenswrapper[7271]: I0313 10:54:49.709715 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-w85p5" Mar 13 10:54:49.737064 master-0 kubenswrapper[7271]: I0313 10:54:49.737001 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63379812-ee71-46c4-96b2-731b2c9df5f4-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.838240 master-0 kubenswrapper[7271]: I0313 10:54:49.838187 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63379812-ee71-46c4-96b2-731b2c9df5f4-tuning-conf-dir\") pod 
\"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.838501 master-0 kubenswrapper[7271]: I0313 10:54:49.838272 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63379812-ee71-46c4-96b2-731b2c9df5f4-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.838501 master-0 kubenswrapper[7271]: I0313 10:54:49.838308 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/63379812-ee71-46c4-96b2-731b2c9df5f4-ready\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.838636 master-0 kubenswrapper[7271]: I0313 10:54:49.838482 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxsb\" (UniqueName: \"kubernetes.io/projected/63379812-ee71-46c4-96b2-731b2c9df5f4-kube-api-access-lmxsb\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.839566 master-0 kubenswrapper[7271]: I0313 10:54:49.839536 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63379812-ee71-46c4-96b2-731b2c9df5f4-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.882835 master-0 kubenswrapper[7271]: I0313 10:54:49.882771 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:49.882835 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:49.882835 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:49.882835 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:49.883158 master-0 kubenswrapper[7271]: I0313 10:54:49.882847 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:49.939457 master-0 kubenswrapper[7271]: I0313 10:54:49.939380 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/63379812-ee71-46c4-96b2-731b2c9df5f4-ready\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.939743 master-0 kubenswrapper[7271]: I0313 10:54:49.939567 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxsb\" (UniqueName: \"kubernetes.io/projected/63379812-ee71-46c4-96b2-731b2c9df5f4-kube-api-access-lmxsb\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.939743 master-0 kubenswrapper[7271]: I0313 10:54:49.939687 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63379812-ee71-46c4-96b2-731b2c9df5f4-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.939893 master-0 
kubenswrapper[7271]: I0313 10:54:49.939807 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63379812-ee71-46c4-96b2-731b2c9df5f4-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.940007 master-0 kubenswrapper[7271]: I0313 10:54:49.939954 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/63379812-ee71-46c4-96b2-731b2c9df5f4-ready\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:49.954093 master-0 kubenswrapper[7271]: I0313 10:54:49.954049 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxsb\" (UniqueName: \"kubernetes.io/projected/63379812-ee71-46c4-96b2-731b2c9df5f4-kube-api-access-lmxsb\") pod \"cni-sysctl-allowlist-ds-c4xgk\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:50.030143 master-0 kubenswrapper[7271]: I0313 10:54:50.030007 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:50.050243 master-0 kubenswrapper[7271]: W0313 10:54:50.050190 7271 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63379812_ee71_46c4_96b2_731b2c9df5f4.slice/crio-9f7d850102949d5e6df6d118676255760fd83141db872c7e8b4f8ce7940515e9 WatchSource:0}: Error finding container 9f7d850102949d5e6df6d118676255760fd83141db872c7e8b4f8ce7940515e9: Status 404 returned error can't find the container with id 9f7d850102949d5e6df6d118676255760fd83141db872c7e8b4f8ce7940515e9 Mar 13 10:54:50.882124 master-0 kubenswrapper[7271]: I0313 10:54:50.882056 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:50.882124 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:50.882124 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:50.882124 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:50.882124 master-0 kubenswrapper[7271]: I0313 10:54:50.882117 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:54:51.000329 master-0 kubenswrapper[7271]: I0313 10:54:51.000249 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" event={"ID":"63379812-ee71-46c4-96b2-731b2c9df5f4","Type":"ContainerStarted","Data":"b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604"} Mar 13 10:54:51.000329 master-0 kubenswrapper[7271]: I0313 10:54:51.000311 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" event={"ID":"63379812-ee71-46c4-96b2-731b2c9df5f4","Type":"ContainerStarted","Data":"9f7d850102949d5e6df6d118676255760fd83141db872c7e8b4f8ce7940515e9"} Mar 13 10:54:51.000849 master-0 kubenswrapper[7271]: I0313 10:54:51.000777 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:51.023046 master-0 kubenswrapper[7271]: I0313 10:54:51.022990 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" Mar 13 10:54:51.028232 master-0 kubenswrapper[7271]: I0313 10:54:51.028154 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" podStartSLOduration=2.028132418 podStartE2EDuration="2.028132418s" podCreationTimestamp="2026-03-13 10:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:54:51.021744481 +0000 UTC m=+1145.548566881" watchObservedRunningTime="2026-03-13 10:54:51.028132418 +0000 UTC m=+1145.554954808" Mar 13 10:54:51.716752 master-0 kubenswrapper[7271]: I0313 10:54:51.716694 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-c4xgk"] Mar 13 10:54:51.882737 master-0 kubenswrapper[7271]: I0313 10:54:51.882691 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:54:51.882737 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:54:51.882737 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:54:51.882737 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:54:51.883323 master-0 kubenswrapper[7271]: 
I0313 10:54:51.882751 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:54:52.883244 master-0 kubenswrapper[7271]: I0313 10:54:52.883167 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:54:52.883244 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:54:52.883244 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:54:52.883244 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:54:52.883957 master-0 kubenswrapper[7271]: I0313 10:54:52.883270 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:54:53.016613 master-0 kubenswrapper[7271]: I0313 10:54:53.015744 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" podUID="63379812-ee71-46c4-96b2-731b2c9df5f4" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" gracePeriod=30
Mar 13 10:54:53.882922 master-0 kubenswrapper[7271]: I0313 10:54:53.882859 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:54:53.882922 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:54:53.882922 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:54:53.882922 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:54:53.883242 master-0 kubenswrapper[7271]: I0313 10:54:53.882945 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:54:54.882159 master-0 kubenswrapper[7271]: I0313 10:54:54.882063 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:54:54.882159 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:54:54.882159 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:54:54.882159 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:54:54.882159 master-0 kubenswrapper[7271]: I0313 10:54:54.882142 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:54:55.883731 master-0 kubenswrapper[7271]: I0313 10:54:55.883576 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:54:55.883731 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:54:55.883731 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:54:55.883731 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:54:55.883731 master-0 kubenswrapper[7271]: I0313 10:54:55.883676 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:54:56.883083 master-0 kubenswrapper[7271]: I0313 10:54:56.883010 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:54:56.883083 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:54:56.883083 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:54:56.883083 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:54:56.883465 master-0 kubenswrapper[7271]: I0313 10:54:56.883117 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:54:57.882914 master-0 kubenswrapper[7271]: I0313 10:54:57.882845 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:54:57.882914 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:54:57.882914 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:54:57.882914 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:54:57.882914 master-0 kubenswrapper[7271]: I0313 10:54:57.882908 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:54:58.886079 master-0 kubenswrapper[7271]: I0313 10:54:58.885995 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:54:58.886079 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:54:58.886079 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:54:58.886079 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:54:58.886792 master-0 kubenswrapper[7271]: I0313 10:54:58.886098 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:54:59.883031 master-0 kubenswrapper[7271]: I0313 10:54:59.882974 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:54:59.883031 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:54:59.883031 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:54:59.883031 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:54:59.883417 master-0 kubenswrapper[7271]: I0313 10:54:59.883056 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:00.032781 master-0 kubenswrapper[7271]: E0313 10:55:00.032472 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:55:00.034174 master-0 kubenswrapper[7271]: E0313 10:55:00.034112 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:55:00.035510 master-0 kubenswrapper[7271]: E0313 10:55:00.035453 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:55:00.035590 master-0 kubenswrapper[7271]: E0313 10:55:00.035532 7271 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" podUID="63379812-ee71-46c4-96b2-731b2c9df5f4" containerName="kube-multus-additional-cni-plugins"
Mar 13 10:55:00.882576 master-0 kubenswrapper[7271]: I0313 10:55:00.882525 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:00.882576 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:00.882576 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:00.882576 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:00.882576 master-0 kubenswrapper[7271]: I0313 10:55:00.882615 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:01.882889 master-0 kubenswrapper[7271]: I0313 10:55:01.882833 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:01.882889 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:01.882889 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:01.882889 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:01.883572 master-0 kubenswrapper[7271]: I0313 10:55:01.882909 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:02.882553 master-0 kubenswrapper[7271]: I0313 10:55:02.882477 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:02.882553 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:02.882553 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:02.882553 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:02.882983 master-0 kubenswrapper[7271]: I0313 10:55:02.882579 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:03.882714 master-0 kubenswrapper[7271]: I0313 10:55:03.882653 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:03.882714 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:03.882714 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:03.882714 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:03.883488 master-0 kubenswrapper[7271]: I0313 10:55:03.882729 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:04.882819 master-0 kubenswrapper[7271]: I0313 10:55:04.882749 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:04.882819 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:04.882819 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:04.882819 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:04.883432 master-0 kubenswrapper[7271]: I0313 10:55:04.882826 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:05.883232 master-0 kubenswrapper[7271]: I0313 10:55:05.883123 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:05.883232 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:05.883232 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:05.883232 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:05.883232 master-0 kubenswrapper[7271]: I0313 10:55:05.883232 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:06.882715 master-0 kubenswrapper[7271]: I0313 10:55:06.882650 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:06.882715 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:06.882715 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:06.882715 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:06.883016 master-0 kubenswrapper[7271]: I0313 10:55:06.882746 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:07.882835 master-0 kubenswrapper[7271]: I0313 10:55:07.882783 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:07.882835 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:07.882835 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:07.882835 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:07.884179 master-0 kubenswrapper[7271]: I0313 10:55:07.884149 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:08.882869 master-0 kubenswrapper[7271]: I0313 10:55:08.882791 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:08.882869 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:08.882869 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:08.882869 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:08.882869 master-0 kubenswrapper[7271]: I0313 10:55:08.882874 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:09.882049 master-0 kubenswrapper[7271]: I0313 10:55:09.881996 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:09.882049 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:09.882049 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:09.882049 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:09.882548 master-0 kubenswrapper[7271]: I0313 10:55:09.882514 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:10.032733 master-0 kubenswrapper[7271]: E0313 10:55:10.032674 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:55:10.033677 master-0 kubenswrapper[7271]: E0313 10:55:10.033636 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:55:10.035276 master-0 kubenswrapper[7271]: E0313 10:55:10.035246 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:55:10.035344 master-0 kubenswrapper[7271]: E0313 10:55:10.035283 7271 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" podUID="63379812-ee71-46c4-96b2-731b2c9df5f4" containerName="kube-multus-additional-cni-plugins"
Mar 13 10:55:10.882291 master-0 kubenswrapper[7271]: I0313 10:55:10.882225 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:10.882291 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:10.882291 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:10.882291 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:10.882616 master-0 kubenswrapper[7271]: I0313 10:55:10.882297 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:11.881885 master-0 kubenswrapper[7271]: I0313 10:55:11.881820 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:11.881885 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:11.881885 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:11.881885 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:11.881885 master-0 kubenswrapper[7271]: I0313 10:55:11.881884 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:12.882943 master-0 kubenswrapper[7271]: I0313 10:55:12.882870 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:12.882943 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:12.882943 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:12.882943 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:12.883756 master-0 kubenswrapper[7271]: I0313 10:55:12.882961 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:13.883312 master-0 kubenswrapper[7271]: I0313 10:55:13.883257 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:13.883312 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:13.883312 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:13.883312 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:13.884060 master-0 kubenswrapper[7271]: I0313 10:55:13.883319 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:14.883171 master-0 kubenswrapper[7271]: I0313 10:55:14.883075 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:14.883171 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:14.883171 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:14.883171 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:14.883171 master-0 kubenswrapper[7271]: I0313 10:55:14.883169 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:15.882735 master-0 kubenswrapper[7271]: I0313 10:55:15.882672 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:15.882735 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:15.882735 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:15.882735 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:15.883056 master-0 kubenswrapper[7271]: I0313 10:55:15.882744 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:16.883441 master-0 kubenswrapper[7271]: I0313 10:55:16.883367 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:16.883441 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:16.883441 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:16.883441 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:16.883441 master-0 kubenswrapper[7271]: I0313 10:55:16.883437 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:17.883046 master-0 kubenswrapper[7271]: I0313 10:55:17.882976 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:17.883046 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:17.883046 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:17.883046 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:17.883046 master-0 kubenswrapper[7271]: I0313 10:55:17.883044 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:18.885531 master-0 kubenswrapper[7271]: I0313 10:55:18.885424 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:18.885531 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:18.885531 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:18.885531 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:18.886722 master-0 kubenswrapper[7271]: I0313 10:55:18.885545 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:19.883324 master-0 kubenswrapper[7271]: I0313 10:55:19.883242 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:19.883324 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:19.883324 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:19.883324 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:19.883973 master-0 kubenswrapper[7271]: I0313 10:55:19.883336 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:20.033291 master-0 kubenswrapper[7271]: E0313 10:55:20.033199 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:55:20.035670 master-0 kubenswrapper[7271]: E0313 10:55:20.035551 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:55:20.037663 master-0 kubenswrapper[7271]: E0313 10:55:20.037580 7271 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 10:55:20.037770 master-0 kubenswrapper[7271]: E0313 10:55:20.037660 7271 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" podUID="63379812-ee71-46c4-96b2-731b2c9df5f4" containerName="kube-multus-additional-cni-plugins"
Mar 13 10:55:20.883020 master-0 kubenswrapper[7271]: I0313 10:55:20.882945 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:20.883020 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:20.883020 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:20.883020 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:20.883020 master-0 kubenswrapper[7271]: I0313 10:55:20.883022 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:21.886468 master-0 kubenswrapper[7271]: I0313 10:55:21.886401 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:21.886468 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:21.886468 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:21.886468 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:21.887180 master-0 kubenswrapper[7271]: I0313 10:55:21.886468 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:22.883743 master-0 kubenswrapper[7271]: I0313 10:55:22.883672 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 10:55:22.883743 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld
Mar 13 10:55:22.883743 master-0 kubenswrapper[7271]: [+]process-running ok
Mar 13 10:55:22.883743 master-0 kubenswrapper[7271]: healthz check failed
Mar 13 10:55:22.884256 master-0 kubenswrapper[7271]: I0313 10:55:22.883767 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 10:55:23.147222 master-0 kubenswrapper[7271]: I0313 10:55:23.147110 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-c4xgk_63379812-ee71-46c4-96b2-731b2c9df5f4/kube-multus-additional-cni-plugins/0.log"
Mar 13 10:55:23.147222 master-0 kubenswrapper[7271]: I0313 10:55:23.147202 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk"
Mar 13 10:55:23.225831 master-0 kubenswrapper[7271]: I0313 10:55:23.225787 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-c4xgk_63379812-ee71-46c4-96b2-731b2c9df5f4/kube-multus-additional-cni-plugins/0.log"
Mar 13 10:55:23.225831 master-0 kubenswrapper[7271]: I0313 10:55:23.225836 7271 generic.go:334] "Generic (PLEG): container finished" podID="63379812-ee71-46c4-96b2-731b2c9df5f4" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604" exitCode=137
Mar 13 10:55:23.226746 master-0 kubenswrapper[7271]: I0313 10:55:23.225909 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" event={"ID":"63379812-ee71-46c4-96b2-731b2c9df5f4","Type":"ContainerDied","Data":"b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604"}
Mar 13 10:55:23.226746 master-0 kubenswrapper[7271]: I0313 10:55:23.225921 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk"
Mar 13 10:55:23.226746 master-0 kubenswrapper[7271]: I0313 10:55:23.225950 7271 scope.go:117] "RemoveContainer" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604"
Mar 13 10:55:23.226746 master-0 kubenswrapper[7271]: I0313 10:55:23.225938 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-c4xgk" event={"ID":"63379812-ee71-46c4-96b2-731b2c9df5f4","Type":"ContainerDied","Data":"9f7d850102949d5e6df6d118676255760fd83141db872c7e8b4f8ce7940515e9"}
Mar 13 10:55:23.228318 master-0 kubenswrapper[7271]: I0313 10:55:23.228210 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/5.log"
Mar 13 10:55:23.229299 master-0 kubenswrapper[7271]: I0313 10:55:23.229276 7271 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/4.log"
Mar 13 10:55:23.230339 master-0 kubenswrapper[7271]: I0313 10:55:23.230300 7271 generic.go:334] "Generic (PLEG): container finished" podID="7667717b-fb74-456b-8615-16475cb69e98" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0" exitCode=1
Mar 13 10:55:23.230392 master-0 kubenswrapper[7271]: I0313 10:55:23.230335 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerDied","Data":"8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0"}
Mar 13 10:55:23.231125 master-0 kubenswrapper[7271]: I0313 10:55:23.231100 7271 scope.go:117] "RemoveContainer" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0"
Mar 13 10:55:23.231411 master-0 kubenswrapper[7271]: E0313 10:55:23.231369 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98"
Mar 13 10:55:23.246568 master-0 kubenswrapper[7271]: I0313 10:55:23.246512 7271 scope.go:117] "RemoveContainer" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604"
Mar 13 10:55:23.247032 master-0 kubenswrapper[7271]: E0313 10:55:23.246992 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604\": container with ID starting with b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604 not found: ID does not exist" containerID="b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604"
Mar 13 10:55:23.247091 master-0 kubenswrapper[7271]: I0313 10:55:23.247030 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604"} err="failed to get container status \"b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604\": rpc error: code = NotFound desc = could not find container \"b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604\": container with ID starting with b92af18240ade41ca40f3b0134da90a6dbe2e64d013cda1adb4b476b9f6d6604 not found: ID does not exist"
Mar 13 10:55:23.247091 master-0 kubenswrapper[7271]: I0313 10:55:23.247051 7271 scope.go:117] "RemoveContainer" containerID="e612f73942ab70c1904fa8093204e01d65a250553aee680c1e4249be0f185d7a"
Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.297275 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63379812-ee71-46c4-96b2-731b2c9df5f4-cni-sysctl-allowlist\") pod \"63379812-ee71-46c4-96b2-731b2c9df5f4\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") "
Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.297349 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63379812-ee71-46c4-96b2-731b2c9df5f4-tuning-conf-dir\") pod \"63379812-ee71-46c4-96b2-731b2c9df5f4\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") "
Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.297534 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63379812-ee71-46c4-96b2-731b2c9df5f4-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "63379812-ee71-46c4-96b2-731b2c9df5f4" (UID: "63379812-ee71-46c4-96b2-731b2c9df5f4"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.297884 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63379812-ee71-46c4-96b2-731b2c9df5f4-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "63379812-ee71-46c4-96b2-731b2c9df5f4" (UID: "63379812-ee71-46c4-96b2-731b2c9df5f4"). InnerVolumeSpecName "cni-sysctl-allowlist".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.298133 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/63379812-ee71-46c4-96b2-731b2c9df5f4-ready\") pod \"63379812-ee71-46c4-96b2-731b2c9df5f4\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.298199 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmxsb\" (UniqueName: \"kubernetes.io/projected/63379812-ee71-46c4-96b2-731b2c9df5f4-kube-api-access-lmxsb\") pod \"63379812-ee71-46c4-96b2-731b2c9df5f4\" (UID: \"63379812-ee71-46c4-96b2-731b2c9df5f4\") " Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.298614 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63379812-ee71-46c4-96b2-731b2c9df5f4-ready" (OuterVolumeSpecName: "ready") pod "63379812-ee71-46c4-96b2-731b2c9df5f4" (UID: "63379812-ee71-46c4-96b2-731b2c9df5f4"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.298870 7271 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/63379812-ee71-46c4-96b2-731b2c9df5f4-ready\") on node \"master-0\" DevicePath \"\"" Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.298895 7271 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63379812-ee71-46c4-96b2-731b2c9df5f4-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 13 10:55:23.299317 master-0 kubenswrapper[7271]: I0313 10:55:23.298914 7271 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63379812-ee71-46c4-96b2-731b2c9df5f4-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:55:23.308705 master-0 kubenswrapper[7271]: I0313 10:55:23.307428 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63379812-ee71-46c4-96b2-731b2c9df5f4-kube-api-access-lmxsb" (OuterVolumeSpecName: "kube-api-access-lmxsb") pod "63379812-ee71-46c4-96b2-731b2c9df5f4" (UID: "63379812-ee71-46c4-96b2-731b2c9df5f4"). InnerVolumeSpecName "kube-api-access-lmxsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:55:23.400214 master-0 kubenswrapper[7271]: I0313 10:55:23.400053 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmxsb\" (UniqueName: \"kubernetes.io/projected/63379812-ee71-46c4-96b2-731b2c9df5f4-kube-api-access-lmxsb\") on node \"master-0\" DevicePath \"\"" Mar 13 10:55:23.559678 master-0 kubenswrapper[7271]: I0313 10:55:23.559129 7271 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-c4xgk"] Mar 13 10:55:23.561957 master-0 kubenswrapper[7271]: I0313 10:55:23.561898 7271 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-c4xgk"] Mar 13 10:55:23.653513 master-0 kubenswrapper[7271]: I0313 10:55:23.653363 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63379812-ee71-46c4-96b2-731b2c9df5f4" path="/var/lib/kubelet/pods/63379812-ee71-46c4-96b2-731b2c9df5f4/volumes" Mar 13 10:55:23.882870 master-0 kubenswrapper[7271]: I0313 10:55:23.882811 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:23.882870 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:23.882870 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:23.882870 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:23.882870 master-0 kubenswrapper[7271]: I0313 10:55:23.882872 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:24.241048 master-0 kubenswrapper[7271]: I0313 10:55:24.240975 7271 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/5.log" Mar 13 10:55:24.882468 master-0 kubenswrapper[7271]: I0313 10:55:24.882408 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:24.882468 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:24.882468 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:24.882468 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:24.882835 master-0 kubenswrapper[7271]: I0313 10:55:24.882474 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:25.919550 master-0 kubenswrapper[7271]: I0313 10:55:25.919446 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:25.919550 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:25.919550 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:25.919550 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:25.919550 master-0 kubenswrapper[7271]: I0313 10:55:25.919508 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:26.000532 master-0 kubenswrapper[7271]: I0313 10:55:26.000462 
7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-6745c97c48-85rlf"] Mar 13 10:55:26.000863 master-0 kubenswrapper[7271]: E0313 10:55:26.000833 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63379812-ee71-46c4-96b2-731b2c9df5f4" containerName="kube-multus-additional-cni-plugins" Mar 13 10:55:26.000919 master-0 kubenswrapper[7271]: I0313 10:55:26.000866 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="63379812-ee71-46c4-96b2-731b2c9df5f4" containerName="kube-multus-additional-cni-plugins" Mar 13 10:55:26.001066 master-0 kubenswrapper[7271]: I0313 10:55:26.001042 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="63379812-ee71-46c4-96b2-731b2c9df5f4" containerName="kube-multus-additional-cni-plugins" Mar 13 10:55:26.002163 master-0 kubenswrapper[7271]: I0313 10:55:26.002133 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.004130 master-0 kubenswrapper[7271]: I0313 10:55:26.004090 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 13 10:55:26.004195 master-0 kubenswrapper[7271]: I0313 10:55:26.004089 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 13 10:55:26.008062 master-0 kubenswrapper[7271]: I0313 10:55:26.007965 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-98k6z" Mar 13 10:55:26.008169 master-0 kubenswrapper[7271]: I0313 10:55:26.007920 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 10:55:26.008257 master-0 kubenswrapper[7271]: I0313 10:55:26.008225 7271 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"telemeter-client-tls" Mar 13 10:55:26.008523 master-0 kubenswrapper[7271]: I0313 10:55:26.008499 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 13 10:55:26.017755 master-0 kubenswrapper[7271]: I0313 10:55:26.016316 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 13 10:55:26.017755 master-0 kubenswrapper[7271]: I0313 10:55:26.017681 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6745c97c48-85rlf"] Mar 13 10:55:26.036700 master-0 kubenswrapper[7271]: I0313 10:55:26.036268 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2q2f\" (UniqueName: \"kubernetes.io/projected/939a3da3-62e7-4376-853d-dc333465446c-kube-api-access-t2q2f\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.036700 master-0 kubenswrapper[7271]: I0313 10:55:26.036348 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.036700 master-0 kubenswrapper[7271]: I0313 10:55:26.036385 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " 
pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.036700 master-0 kubenswrapper[7271]: I0313 10:55:26.036414 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.036700 master-0 kubenswrapper[7271]: I0313 10:55:26.036467 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.036700 master-0 kubenswrapper[7271]: I0313 10:55:26.036492 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.036700 master-0 kubenswrapper[7271]: I0313 10:55:26.036530 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.036700 master-0 kubenswrapper[7271]: I0313 10:55:26.036633 7271 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-metrics-client-ca\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.137395 master-0 kubenswrapper[7271]: I0313 10:55:26.137320 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.137395 master-0 kubenswrapper[7271]: I0313 10:55:26.137365 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.137395 master-0 kubenswrapper[7271]: I0313 10:55:26.137399 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.137842 master-0 kubenswrapper[7271]: I0313 10:55:26.137433 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-metrics-client-ca\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: 
\"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.137842 master-0 kubenswrapper[7271]: I0313 10:55:26.137624 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2q2f\" (UniqueName: \"kubernetes.io/projected/939a3da3-62e7-4376-853d-dc333465446c-kube-api-access-t2q2f\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.137842 master-0 kubenswrapper[7271]: I0313 10:55:26.137666 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.137842 master-0 kubenswrapper[7271]: I0313 10:55:26.137687 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.138196 master-0 kubenswrapper[7271]: I0313 10:55:26.138145 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.140342 master-0 kubenswrapper[7271]: I0313 10:55:26.138638 7271 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.140342 master-0 kubenswrapper[7271]: I0313 10:55:26.139468 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.140342 master-0 kubenswrapper[7271]: I0313 10:55:26.139795 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-metrics-client-ca\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.140621 master-0 kubenswrapper[7271]: I0313 10:55:26.140562 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.141170 master-0 kubenswrapper[7271]: I0313 10:55:26.141130 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" 
(UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.143237 master-0 kubenswrapper[7271]: I0313 10:55:26.143197 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.143712 master-0 kubenswrapper[7271]: I0313 10:55:26.143639 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.154762 master-0 kubenswrapper[7271]: I0313 10:55:26.154729 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2q2f\" (UniqueName: \"kubernetes.io/projected/939a3da3-62e7-4376-853d-dc333465446c-kube-api-access-t2q2f\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.329060 master-0 kubenswrapper[7271]: I0313 10:55:26.328974 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:55:26.732866 master-0 kubenswrapper[7271]: I0313 10:55:26.732693 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6745c97c48-85rlf"] Mar 13 10:55:26.883901 master-0 kubenswrapper[7271]: I0313 10:55:26.883818 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:26.883901 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:26.883901 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:26.883901 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:26.884755 master-0 kubenswrapper[7271]: I0313 10:55:26.884722 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:27.269236 master-0 kubenswrapper[7271]: I0313 10:55:27.269047 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" event={"ID":"939a3da3-62e7-4376-853d-dc333465446c","Type":"ContainerStarted","Data":"e01c7e10f931b9cb63a3c6e8ff2acd27c52f8bd5121303551b964879d91d68bc"} Mar 13 10:55:27.269236 master-0 kubenswrapper[7271]: I0313 10:55:27.269120 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" event={"ID":"939a3da3-62e7-4376-853d-dc333465446c","Type":"ContainerStarted","Data":"1336242474e613173ec68e217f7c2525d410f6c9d6177b623e5fd724baa8f4a8"} Mar 13 10:55:27.269236 master-0 kubenswrapper[7271]: I0313 10:55:27.269131 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" event={"ID":"939a3da3-62e7-4376-853d-dc333465446c","Type":"ContainerStarted","Data":"55a1d5837522d7e01234fc3a1e97db7f263a7fc3afa44798b92dc0468688e3b2"} Mar 13 10:55:27.269236 master-0 kubenswrapper[7271]: I0313 10:55:27.269140 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" event={"ID":"939a3da3-62e7-4376-853d-dc333465446c","Type":"ContainerStarted","Data":"7cf1f1393ed4dc75d53053e58fde65a2d67118e8d37c0361a92ae7802d8b760d"} Mar 13 10:55:27.883800 master-0 kubenswrapper[7271]: I0313 10:55:27.883754 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:27.883800 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:27.883800 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:27.883800 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:27.884235 master-0 kubenswrapper[7271]: I0313 10:55:27.884206 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:28.883282 master-0 kubenswrapper[7271]: I0313 10:55:28.883181 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:28.883282 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:28.883282 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:28.883282 master-0 kubenswrapper[7271]: healthz 
check failed Mar 13 10:55:28.884722 master-0 kubenswrapper[7271]: I0313 10:55:28.884667 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:29.881772 master-0 kubenswrapper[7271]: I0313 10:55:29.881652 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:29.881772 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:29.881772 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:29.881772 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:29.882416 master-0 kubenswrapper[7271]: I0313 10:55:29.881835 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:30.882528 master-0 kubenswrapper[7271]: I0313 10:55:30.882447 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:30.882528 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:30.882528 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:30.882528 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:30.883289 master-0 kubenswrapper[7271]: I0313 10:55:30.882616 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:31.884216 master-0 kubenswrapper[7271]: I0313 10:55:31.884091 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:31.884216 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:31.884216 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:31.884216 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:31.884216 master-0 kubenswrapper[7271]: I0313 10:55:31.884204 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:32.884036 master-0 kubenswrapper[7271]: I0313 10:55:32.883955 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:32.884036 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:32.884036 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:32.884036 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:32.884718 master-0 kubenswrapper[7271]: I0313 10:55:32.884073 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:33.883280 master-0 kubenswrapper[7271]: I0313 10:55:33.883189 7271 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:33.883280 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:33.883280 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:33.883280 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:33.883280 master-0 kubenswrapper[7271]: I0313 10:55:33.883257 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:34.882724 master-0 kubenswrapper[7271]: I0313 10:55:34.882680 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:34.882724 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:34.882724 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:34.882724 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:34.883356 master-0 kubenswrapper[7271]: I0313 10:55:34.883332 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:35.649148 master-0 kubenswrapper[7271]: I0313 10:55:35.649098 7271 scope.go:117] "RemoveContainer" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0" Mar 13 10:55:35.649391 master-0 kubenswrapper[7271]: E0313 10:55:35.649340 7271 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:55:35.686867 master-0 kubenswrapper[7271]: I0313 10:55:35.686731 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" podStartSLOduration=10.686711981 podStartE2EDuration="10.686711981s" podCreationTimestamp="2026-03-13 10:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:55:27.30019183 +0000 UTC m=+1181.827014240" watchObservedRunningTime="2026-03-13 10:55:35.686711981 +0000 UTC m=+1190.213534371" Mar 13 10:55:35.883640 master-0 kubenswrapper[7271]: I0313 10:55:35.883539 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:35.883640 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:35.883640 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:35.883640 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:35.884702 master-0 kubenswrapper[7271]: I0313 10:55:35.883653 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:36.882856 master-0 kubenswrapper[7271]: I0313 10:55:36.882781 7271 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:36.882856 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:36.882856 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:36.882856 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:36.883211 master-0 kubenswrapper[7271]: I0313 10:55:36.882877 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:37.883018 master-0 kubenswrapper[7271]: I0313 10:55:37.882936 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:37.883018 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:37.883018 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:37.883018 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:37.883789 master-0 kubenswrapper[7271]: I0313 10:55:37.883042 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:38.883502 master-0 kubenswrapper[7271]: I0313 10:55:38.883406 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 
10:55:38.883502 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:38.883502 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:38.883502 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:38.884570 master-0 kubenswrapper[7271]: I0313 10:55:38.883543 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:39.882710 master-0 kubenswrapper[7271]: I0313 10:55:39.882653 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:39.882710 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:39.882710 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:39.882710 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:39.883472 master-0 kubenswrapper[7271]: I0313 10:55:39.882811 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:40.883610 master-0 kubenswrapper[7271]: I0313 10:55:40.883521 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:40.883610 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:40.883610 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:40.883610 master-0 kubenswrapper[7271]: healthz 
check failed Mar 13 10:55:40.884216 master-0 kubenswrapper[7271]: I0313 10:55:40.883658 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:41.883828 master-0 kubenswrapper[7271]: I0313 10:55:41.883746 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:41.883828 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:41.883828 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:41.883828 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:41.883828 master-0 kubenswrapper[7271]: I0313 10:55:41.883829 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:42.882228 master-0 kubenswrapper[7271]: I0313 10:55:42.882165 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:42.882228 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:42.882228 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:42.882228 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:42.882228 master-0 kubenswrapper[7271]: I0313 10:55:42.882222 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" 
podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:43.883106 master-0 kubenswrapper[7271]: I0313 10:55:43.883047 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:43.883106 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:43.883106 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:43.883106 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:43.883106 master-0 kubenswrapper[7271]: I0313 10:55:43.883107 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:44.883021 master-0 kubenswrapper[7271]: I0313 10:55:44.882884 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:44.883021 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:44.883021 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:44.883021 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:44.883651 master-0 kubenswrapper[7271]: I0313 10:55:44.883064 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:45.883636 master-0 kubenswrapper[7271]: I0313 10:55:45.883493 7271 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:45.883636 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:45.883636 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:45.883636 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:45.883636 master-0 kubenswrapper[7271]: I0313 10:55:45.883565 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:47.185156 master-0 kubenswrapper[7271]: I0313 10:55:47.185081 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:47.185156 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:47.185156 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:47.185156 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:47.185156 master-0 kubenswrapper[7271]: I0313 10:55:47.185159 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:47.883104 master-0 kubenswrapper[7271]: I0313 10:55:47.883037 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:47.883104 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:47.883104 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:47.883104 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:47.883104 master-0 kubenswrapper[7271]: I0313 10:55:47.883106 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:48.883190 master-0 kubenswrapper[7271]: I0313 10:55:48.883093 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:48.883190 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:48.883190 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:48.883190 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:48.883976 master-0 kubenswrapper[7271]: I0313 10:55:48.883219 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:49.646209 master-0 kubenswrapper[7271]: I0313 10:55:49.646059 7271 scope.go:117] "RemoveContainer" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0" Mar 13 10:55:49.646806 master-0 kubenswrapper[7271]: E0313 10:55:49.646383 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator 
pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:55:49.883073 master-0 kubenswrapper[7271]: I0313 10:55:49.882972 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:49.883073 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:49.883073 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:49.883073 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:49.883073 master-0 kubenswrapper[7271]: I0313 10:55:49.883056 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:50.883374 master-0 kubenswrapper[7271]: I0313 10:55:50.883251 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:50.883374 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:50.883374 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:50.883374 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:50.884108 master-0 kubenswrapper[7271]: I0313 10:55:50.883380 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 13 10:55:51.883484 master-0 kubenswrapper[7271]: I0313 10:55:51.883402 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:51.883484 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:51.883484 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:51.883484 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:51.884164 master-0 kubenswrapper[7271]: I0313 10:55:51.883501 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:52.883673 master-0 kubenswrapper[7271]: I0313 10:55:52.883558 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:52.883673 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:52.883673 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:52.883673 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:52.883673 master-0 kubenswrapper[7271]: I0313 10:55:52.883640 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:53.882521 master-0 kubenswrapper[7271]: I0313 10:55:53.882469 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:53.882521 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:53.882521 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:53.882521 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:53.882878 master-0 kubenswrapper[7271]: I0313 10:55:53.882541 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:54.883253 master-0 kubenswrapper[7271]: I0313 10:55:54.883167 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:54.883253 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:55:54.883253 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:54.883253 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:54.884216 master-0 kubenswrapper[7271]: I0313 10:55:54.883293 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:55.883607 master-0 kubenswrapper[7271]: I0313 10:55:55.883492 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:55:55.883607 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 
10:55:55.883607 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:55:55.883607 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:55:55.884460 master-0 kubenswrapper[7271]: I0313 10:55:55.883613 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:55:55.884460 master-0 kubenswrapper[7271]: I0313 10:55:55.883670 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:55:55.884546 master-0 kubenswrapper[7271]: I0313 10:55:55.884456 7271 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"05d3fbc3bee5182c5f9073fc6b00e89e15ad18832a082318ea3763b5cb1e923e"} pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" containerMessage="Container router failed startup probe, will be restarted" Mar 13 10:55:55.884546 master-0 kubenswrapper[7271]: I0313 10:55:55.884502 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" containerID="cri-o://05d3fbc3bee5182c5f9073fc6b00e89e15ad18832a082318ea3763b5cb1e923e" gracePeriod=3600 Mar 13 10:56:04.646151 master-0 kubenswrapper[7271]: I0313 10:56:04.646051 7271 scope.go:117] "RemoveContainer" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0" Mar 13 10:56:04.647046 master-0 kubenswrapper[7271]: E0313 10:56:04.646353 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator 
pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:56:13.004851 master-0 kubenswrapper[7271]: I0313 10:56:13.004683 7271 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 10:56:13.008848 master-0 kubenswrapper[7271]: I0313 10:56:13.008760 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:13.014727 master-0 kubenswrapper[7271]: I0313 10:56:13.014465 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 10:56:13.016942 master-0 kubenswrapper[7271]: I0313 10:56:13.016867 7271 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 10:56:13.018697 master-0 kubenswrapper[7271]: I0313 10:56:13.017373 7271 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-525r2" Mar 13 10:56:13.079207 master-0 kubenswrapper[7271]: I0313 10:56:13.079128 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:13.079207 master-0 kubenswrapper[7271]: I0313 10:56:13.079218 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 
13 10:56:13.079491 master-0 kubenswrapper[7271]: I0313 10:56:13.079274 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:13.180572 master-0 kubenswrapper[7271]: I0313 10:56:13.180487 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:13.180844 master-0 kubenswrapper[7271]: I0313 10:56:13.180661 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:13.180844 master-0 kubenswrapper[7271]: I0313 10:56:13.180769 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:13.180983 master-0 kubenswrapper[7271]: I0313 10:56:13.180952 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 
10:56:13.181594 master-0 kubenswrapper[7271]: I0313 10:56:13.181551 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:13.214641 master-0 kubenswrapper[7271]: I0313 10:56:13.214526 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:13.342383 master-0 kubenswrapper[7271]: I0313 10:56:13.342249 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:13.747706 master-0 kubenswrapper[7271]: I0313 10:56:13.747634 7271 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 10:56:14.414244 master-0 kubenswrapper[7271]: I0313 10:56:14.414172 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"533638d2-44ce-4cf8-aa47-a6b89c94621d","Type":"ContainerStarted","Data":"fe9e59028a5e05ef377e39eb4fc61f98da9b8df986b802547501f57b158fbf17"} Mar 13 10:56:14.414244 master-0 kubenswrapper[7271]: I0313 10:56:14.414242 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"533638d2-44ce-4cf8-aa47-a6b89c94621d","Type":"ContainerStarted","Data":"e2ab44048b41e7d6482d53b636df4ef12bcf58ac194024559096f0e679ffee57"} Mar 13 10:56:14.431105 master-0 kubenswrapper[7271]: I0313 10:56:14.430817 7271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.430781027 podStartE2EDuration="2.430781027s" podCreationTimestamp="2026-03-13 10:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:56:14.429391989 +0000 UTC m=+1228.956214369" watchObservedRunningTime="2026-03-13 10:56:14.430781027 +0000 UTC m=+1228.957603417" Mar 13 10:56:18.646242 master-0 kubenswrapper[7271]: I0313 10:56:18.646028 7271 scope.go:117] "RemoveContainer" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0" Mar 13 10:56:18.647508 master-0 kubenswrapper[7271]: E0313 10:56:18.646616 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:56:29.645520 master-0 kubenswrapper[7271]: I0313 10:56:29.645459 7271 scope.go:117] "RemoveContainer" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0" Mar 13 10:56:29.646088 master-0 kubenswrapper[7271]: E0313 10:56:29.645745 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:56:42.632640 master-0 kubenswrapper[7271]: I0313 10:56:42.632528 7271 generic.go:334] "Generic (PLEG): container finished" podID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" 
containerID="05d3fbc3bee5182c5f9073fc6b00e89e15ad18832a082318ea3763b5cb1e923e" exitCode=0 Mar 13 10:56:42.632640 master-0 kubenswrapper[7271]: I0313 10:56:42.632612 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerDied","Data":"05d3fbc3bee5182c5f9073fc6b00e89e15ad18832a082318ea3763b5cb1e923e"} Mar 13 10:56:42.632640 master-0 kubenswrapper[7271]: I0313 10:56:42.632654 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" event={"ID":"eb778c86-ea51-4eab-82b8-a8e0bec0f050","Type":"ContainerStarted","Data":"62cce4132e0b6eb5fc7753f1f583944701d615691adef8e12b4c1c0ec8facfec"} Mar 13 10:56:42.633386 master-0 kubenswrapper[7271]: I0313 10:56:42.632680 7271 scope.go:117] "RemoveContainer" containerID="0b24465f9fb9d577096620c9e6db8d758ee4126ca9a38d85dbec64c888ecae41" Mar 13 10:56:42.648616 master-0 kubenswrapper[7271]: I0313 10:56:42.648540 7271 scope.go:117] "RemoveContainer" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0" Mar 13 10:56:42.649048 master-0 kubenswrapper[7271]: E0313 10:56:42.648996 7271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-tzd9b_openshift-ingress-operator(7667717b-fb74-456b-8615-16475cb69e98)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" podUID="7667717b-fb74-456b-8615-16475cb69e98" Mar 13 10:56:42.880245 master-0 kubenswrapper[7271]: I0313 10:56:42.880143 7271 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:56:42.880245 master-0 kubenswrapper[7271]: I0313 10:56:42.880226 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:56:42.882877 master-0 kubenswrapper[7271]: I0313 10:56:42.882795 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:42.882877 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:42.882877 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:42.882877 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:42.882877 master-0 kubenswrapper[7271]: I0313 10:56:42.882844 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:43.883148 master-0 kubenswrapper[7271]: I0313 10:56:43.883006 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:43.883148 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:43.883148 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:43.883148 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:43.883148 master-0 kubenswrapper[7271]: I0313 10:56:43.883084 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:44.882633 master-0 kubenswrapper[7271]: I0313 10:56:44.882567 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:44.882633 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:44.882633 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:44.882633 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:44.882923 master-0 kubenswrapper[7271]: I0313 10:56:44.882636 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:45.883699 master-0 kubenswrapper[7271]: I0313 10:56:45.883574 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:45.883699 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:45.883699 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:45.883699 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:45.884543 master-0 kubenswrapper[7271]: I0313 10:56:45.883749 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:46.882516 master-0 kubenswrapper[7271]: I0313 10:56:46.882447 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:46.882516 master-0 
kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:46.882516 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:46.882516 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:46.882867 master-0 kubenswrapper[7271]: I0313 10:56:46.882520 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:47.246613 master-0 kubenswrapper[7271]: I0313 10:56:47.246462 7271 scope.go:117] "RemoveContainer" containerID="d50222c619a1beb462f2ff2c50918ed3814098cfb9ee8c852270a8c209a51384" Mar 13 10:56:47.884224 master-0 kubenswrapper[7271]: I0313 10:56:47.884159 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:47.884224 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:47.884224 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:47.884224 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:47.884647 master-0 kubenswrapper[7271]: I0313 10:56:47.884255 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:48.883098 master-0 kubenswrapper[7271]: I0313 10:56:48.882974 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:48.883098 master-0 kubenswrapper[7271]: 
[-]has-synced failed: reason withheld Mar 13 10:56:48.883098 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:48.883098 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:48.883098 master-0 kubenswrapper[7271]: I0313 10:56:48.883079 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:49.883139 master-0 kubenswrapper[7271]: I0313 10:56:49.883069 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:49.883139 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:49.883139 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:49.883139 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:49.883139 master-0 kubenswrapper[7271]: I0313 10:56:49.883139 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:50.882861 master-0 kubenswrapper[7271]: I0313 10:56:50.882747 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:50.882861 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:50.882861 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:50.882861 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:50.882861 master-0 
kubenswrapper[7271]: I0313 10:56:50.882829 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:51.844083 master-0 kubenswrapper[7271]: I0313 10:56:51.843985 7271 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 10:56:51.845180 master-0 kubenswrapper[7271]: I0313 10:56:51.845139 7271 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 13 10:56:51.845382 master-0 kubenswrapper[7271]: I0313 10:56:51.845327 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:51.845486 master-0 kubenswrapper[7271]: I0313 10:56:51.845440 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239" gracePeriod=15 Mar 13 10:56:51.845566 master-0 kubenswrapper[7271]: I0313 10:56:51.845511 7271 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f" gracePeriod=15 Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: I0313 10:56:51.846073 7271 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: E0313 10:56:51.846402 7271 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: I0313 10:56:51.846416 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: E0313 10:56:51.846433 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: I0313 10:56:51.846438 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: E0313 10:56:51.846451 7271 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: I0313 10:56:51.846456 7271 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: I0313 10:56:51.846602 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: I0313 10:56:51.846623 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 10:56:51.846663 master-0 kubenswrapper[7271]: I0313 10:56:51.846641 7271 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 10:56:51.848345 master-0 kubenswrapper[7271]: I0313 10:56:51.848306 7271 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:51.883146 master-0 kubenswrapper[7271]: I0313 10:56:51.883101 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:51.883146 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:51.883146 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:51.883146 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:51.883868 master-0 kubenswrapper[7271]: I0313 10:56:51.883160 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:51.920224 master-0 kubenswrapper[7271]: E0313 10:56:51.920189 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:56:51.920533 master-0 kubenswrapper[7271]: E0313 10:56:51.920514 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:56:51.921053 master-0 kubenswrapper[7271]: E0313 10:56:51.920999 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:56:51.921764 master-0 kubenswrapper[7271]: E0313 10:56:51.921727 
7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:56:51.922190 master-0 kubenswrapper[7271]: E0313 10:56:51.922159 7271 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:56:51.922252 master-0 kubenswrapper[7271]: I0313 10:56:51.922194 7271 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 10:56:51.922648 master-0 kubenswrapper[7271]: E0313 10:56:51.922617 7271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 13 10:56:51.926796 master-0 kubenswrapper[7271]: E0313 10:56:51.926737 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:51.928167 master-0 kubenswrapper[7271]: E0313 10:56:51.928109 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.011895 master-0 kubenswrapper[7271]: I0313 10:56:52.011837 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.011895 master-0 kubenswrapper[7271]: I0313 10:56:52.011898 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.012215 master-0 kubenswrapper[7271]: I0313 10:56:52.011927 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.012215 master-0 kubenswrapper[7271]: I0313 10:56:52.011949 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.012215 master-0 kubenswrapper[7271]: I0313 10:56:52.012026 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.012215 master-0 kubenswrapper[7271]: I0313 10:56:52.012048 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.012215 master-0 kubenswrapper[7271]: I0313 10:56:52.012084 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.012215 master-0 kubenswrapper[7271]: I0313 10:56:52.012110 7271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.113501 master-0 kubenswrapper[7271]: I0313 10:56:52.113355 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.113501 master-0 kubenswrapper[7271]: I0313 10:56:52.113395 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.113501 master-0 kubenswrapper[7271]: I0313 10:56:52.113425 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.113501 master-0 kubenswrapper[7271]: I0313 10:56:52.113451 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.113501 master-0 kubenswrapper[7271]: I0313 10:56:52.113468 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.113501 master-0 kubenswrapper[7271]: I0313 10:56:52.113491 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.113501 master-0 kubenswrapper[7271]: I0313 10:56:52.113510 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.114024 master-0 kubenswrapper[7271]: I0313 10:56:52.113533 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.114024 master-0 kubenswrapper[7271]: I0313 10:56:52.113549 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.114024 master-0 kubenswrapper[7271]: I0313 10:56:52.113570 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.114024 master-0 kubenswrapper[7271]: I0313 10:56:52.113624 7271 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.114024 master-0 kubenswrapper[7271]: I0313 10:56:52.113648 7271 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.114024 master-0 kubenswrapper[7271]: I0313 10:56:52.113664 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.114024 master-0 kubenswrapper[7271]: I0313 10:56:52.113723 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.114024 master-0 kubenswrapper[7271]: I0313 10:56:52.113733 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.114024 master-0 kubenswrapper[7271]: I0313 10:56:52.113776 7271 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.124315 master-0 kubenswrapper[7271]: E0313 10:56:52.124281 7271 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 13 10:56:52.228669 master-0 kubenswrapper[7271]: I0313 10:56:52.228552 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.229498 master-0 kubenswrapper[7271]: I0313 10:56:52.229449 7271 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.259064 master-0 kubenswrapper[7271]: E0313 10:56:52.258897 7271 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189c615e9c81db15 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:077dd10388b9e3e48a07382126e86621,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:56:52.257708821 +0000 UTC m=+1266.784531211,LastTimestamp:2026-03-13 10:56:52.257708821 +0000 UTC m=+1266.784531211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:56:52.528766 master-0 kubenswrapper[7271]: E0313 10:56:52.526872 7271 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 13 10:56:52.723137 master-0 kubenswrapper[7271]: I0313 10:56:52.722990 7271 generic.go:334] "Generic (PLEG): container finished" podID="533638d2-44ce-4cf8-aa47-a6b89c94621d" containerID="fe9e59028a5e05ef377e39eb4fc61f98da9b8df986b802547501f57b158fbf17" exitCode=0 Mar 13 10:56:52.723137 master-0 kubenswrapper[7271]: I0313 10:56:52.723058 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"533638d2-44ce-4cf8-aa47-a6b89c94621d","Type":"ContainerDied","Data":"fe9e59028a5e05ef377e39eb4fc61f98da9b8df986b802547501f57b158fbf17"} Mar 13 10:56:52.724333 master-0 kubenswrapper[7271]: I0313 10:56:52.724277 7271 status_manager.go:851] "Failed to get status for pod" podUID="533638d2-44ce-4cf8-aa47-a6b89c94621d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:56:52.725198 master-0 kubenswrapper[7271]: I0313 10:56:52.725171 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"a6907ade1777d6a7c993aeb23acaeb6fdd891b625a9b035210953700ede72f63"} Mar 13 10:56:52.725276 master-0 kubenswrapper[7271]: I0313 10:56:52.725199 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"3b9f539be02f519c82f90f79644538b0615d221de57b1fd6c7c4726d8ebe602e"} Mar 13 10:56:52.726271 master-0 
kubenswrapper[7271]: I0313 10:56:52.726213 7271 status_manager.go:851] "Failed to get status for pod" podUID="533638d2-44ce-4cf8-aa47-a6b89c94621d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:56:52.726427 master-0 kubenswrapper[7271]: E0313 10:56:52.726367 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:52.728271 master-0 kubenswrapper[7271]: I0313 10:56:52.728246 7271 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f" exitCode=0 Mar 13 10:56:52.730567 master-0 kubenswrapper[7271]: I0313 10:56:52.730536 7271 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca" exitCode=0 Mar 13 10:56:52.730567 master-0 kubenswrapper[7271]: I0313 10:56:52.730561 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca"} Mar 13 10:56:52.730688 master-0 kubenswrapper[7271]: I0313 10:56:52.730581 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"c797020833454d5ed2c33acc860a0f30fce513778328e3b025208a981e1fff3f"} Mar 13 10:56:52.731455 master-0 kubenswrapper[7271]: E0313 
10:56:52.731420 7271 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:52.731519 master-0 kubenswrapper[7271]: I0313 10:56:52.731437 7271 status_manager.go:851] "Failed to get status for pod" podUID="533638d2-44ce-4cf8-aa47-a6b89c94621d" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:56:52.882036 master-0 kubenswrapper[7271]: I0313 10:56:52.881975 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:52.882036 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:52.882036 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:52.882036 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:52.882036 master-0 kubenswrapper[7271]: I0313 10:56:52.882030 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:53.744819 master-0 kubenswrapper[7271]: I0313 10:56:53.743789 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378"} Mar 13 10:56:53.744819 master-0 kubenswrapper[7271]: I0313 10:56:53.743853 
7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07"} Mar 13 10:56:53.744819 master-0 kubenswrapper[7271]: I0313 10:56:53.743865 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984"} Mar 13 10:56:53.904220 master-0 kubenswrapper[7271]: I0313 10:56:53.896732 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:53.904220 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:53.904220 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:53.904220 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:53.904220 master-0 kubenswrapper[7271]: I0313 10:56:53.896807 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:54.251052 master-0 kubenswrapper[7271]: I0313 10:56:54.251010 7271 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:54.256263 master-0 kubenswrapper[7271]: I0313 10:56:54.256233 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:56:54.345701 master-0 kubenswrapper[7271]: I0313 10:56:54.345495 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") pod \"533638d2-44ce-4cf8-aa47-a6b89c94621d\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " Mar 13 10:56:54.345701 master-0 kubenswrapper[7271]: I0313 10:56:54.345570 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 13 10:56:54.345701 master-0 kubenswrapper[7271]: I0313 10:56:54.345668 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") pod \"533638d2-44ce-4cf8-aa47-a6b89c94621d\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " Mar 13 10:56:54.345701 master-0 kubenswrapper[7271]: I0313 10:56:54.345671 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "533638d2-44ce-4cf8-aa47-a6b89c94621d" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.345663 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.345708 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.345752 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock" (OuterVolumeSpecName: "var-lock") pod "533638d2-44ce-4cf8-aa47-a6b89c94621d" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.345782 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets" (OuterVolumeSpecName: "secrets") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.345812 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.345865 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config" (OuterVolumeSpecName: "config") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.345897 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"533638d2-44ce-4cf8-aa47-a6b89c94621d\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.345926 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.345951 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.346016 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs" (OuterVolumeSpecName: "logs") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.346039 7271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.346048 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:56:54.346133 master-0 kubenswrapper[7271]: I0313 10:56:54.346165 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:56:54.346862 master-0 kubenswrapper[7271]: I0313 10:56:54.346484 7271 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") on node \"master-0\" DevicePath \"\"" Mar 13 10:56:54.346862 master-0 kubenswrapper[7271]: I0313 10:56:54.346500 7271 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:56:54.346862 master-0 kubenswrapper[7271]: I0313 10:56:54.346513 7271 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 10:56:54.346862 master-0 kubenswrapper[7271]: I0313 10:56:54.346524 7271 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 13 10:56:54.346862 master-0 kubenswrapper[7271]: I0313 10:56:54.346536 7271 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 13 10:56:54.346862 master-0 kubenswrapper[7271]: I0313 10:56:54.346548 7271 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:56:54.346862 master-0 kubenswrapper[7271]: I0313 10:56:54.346559 7271 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:56:54.346862 
master-0 kubenswrapper[7271]: I0313 10:56:54.346573 7271 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:56:54.349385 master-0 kubenswrapper[7271]: I0313 10:56:54.349343 7271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "533638d2-44ce-4cf8-aa47-a6b89c94621d" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:56:54.447372 master-0 kubenswrapper[7271]: I0313 10:56:54.447331 7271 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:56:54.772727 master-0 kubenswrapper[7271]: I0313 10:56:54.772609 7271 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239" exitCode=0 Mar 13 10:56:54.772727 master-0 kubenswrapper[7271]: I0313 10:56:54.772664 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 10:56:54.772727 master-0 kubenswrapper[7271]: I0313 10:56:54.772698 7271 scope.go:117] "RemoveContainer" containerID="0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f" Mar 13 10:56:54.788760 master-0 kubenswrapper[7271]: I0313 10:56:54.787959 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3"} Mar 13 10:56:54.788760 master-0 kubenswrapper[7271]: I0313 10:56:54.788038 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445"} Mar 13 10:56:54.788760 master-0 kubenswrapper[7271]: I0313 10:56:54.788316 7271 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:54.791805 master-0 kubenswrapper[7271]: I0313 10:56:54.791227 7271 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"533638d2-44ce-4cf8-aa47-a6b89c94621d","Type":"ContainerDied","Data":"e2ab44048b41e7d6482d53b636df4ef12bcf58ac194024559096f0e679ffee57"} Mar 13 10:56:54.791805 master-0 kubenswrapper[7271]: I0313 10:56:54.791264 7271 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:54.791805 master-0 kubenswrapper[7271]: I0313 10:56:54.791276 7271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ab44048b41e7d6482d53b636df4ef12bcf58ac194024559096f0e679ffee57" Mar 13 10:56:54.798876 master-0 kubenswrapper[7271]: I0313 10:56:54.798742 7271 scope.go:117] "RemoveContainer" containerID="ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239" Mar 13 10:56:54.821678 master-0 kubenswrapper[7271]: I0313 10:56:54.821636 7271 scope.go:117] "RemoveContainer" containerID="e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04" Mar 13 10:56:54.839081 master-0 kubenswrapper[7271]: I0313 10:56:54.839047 7271 scope.go:117] "RemoveContainer" containerID="0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f" Mar 13 10:56:54.849644 master-0 kubenswrapper[7271]: E0313 10:56:54.842995 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f\": container with ID starting with 0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f not found: ID does not exist" containerID="0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f" Mar 13 10:56:54.849644 master-0 kubenswrapper[7271]: I0313 10:56:54.843068 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f"} err="failed to get container status \"0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f\": rpc error: code = NotFound desc = could not find container \"0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f\": container with ID starting with 0a02a1eb2e8e166b8ba4ad221ecbd690f6cbf9e334b441e5c5096bb8f331c40f not found: ID does not exist" Mar 13 10:56:54.849644 master-0 
kubenswrapper[7271]: I0313 10:56:54.843112 7271 scope.go:117] "RemoveContainer" containerID="ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239" Mar 13 10:56:54.849644 master-0 kubenswrapper[7271]: E0313 10:56:54.843417 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239\": container with ID starting with ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239 not found: ID does not exist" containerID="ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239" Mar 13 10:56:54.849644 master-0 kubenswrapper[7271]: I0313 10:56:54.843433 7271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239"} err="failed to get container status \"ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239\": rpc error: code = NotFound desc = could not find container \"ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239\": container with ID starting with ade40bcd87bcb5b50e27312debdd70388bd7803a0fa485aae78b3cece367b239 not found: ID does not exist" Mar 13 10:56:54.849644 master-0 kubenswrapper[7271]: I0313 10:56:54.843444 7271 scope.go:117] "RemoveContainer" containerID="e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04" Mar 13 10:56:54.849644 master-0 kubenswrapper[7271]: E0313 10:56:54.843635 7271 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04\": container with ID starting with e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04 not found: ID does not exist" containerID="e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04" Mar 13 10:56:54.849644 master-0 kubenswrapper[7271]: I0313 10:56:54.843650 7271 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04"} err="failed to get container status \"e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04\": rpc error: code = NotFound desc = could not find container \"e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04\": container with ID starting with e07b009523c772ee55ecbb89b8fbfc4396d18404079202cc555940a21a0e5f04 not found: ID does not exist" Mar 13 10:56:54.887664 master-0 kubenswrapper[7271]: I0313 10:56:54.887572 7271 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:54.887664 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:54.887664 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:54.887664 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:54.887971 master-0 kubenswrapper[7271]: I0313 10:56:54.887668 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:55.655162 master-0 kubenswrapper[7271]: I0313 10:56:55.655113 7271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes" Mar 13 10:56:55.655507 master-0 kubenswrapper[7271]: I0313 10:56:55.655481 7271 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 13 10:56:55.881972 master-0 kubenswrapper[7271]: I0313 10:56:55.881925 7271 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-b4x54 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 10:56:55.881972 master-0 kubenswrapper[7271]: [-]has-synced failed: reason withheld Mar 13 10:56:55.881972 master-0 kubenswrapper[7271]: [+]process-running ok Mar 13 10:56:55.881972 master-0 kubenswrapper[7271]: healthz check failed Mar 13 10:56:55.882485 master-0 kubenswrapper[7271]: I0313 10:56:55.881982 7271 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" podUID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 10:56:56.364900 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 13 10:56:56.399690 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 10:56:56.400073 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 13 10:56:56.402798 master-0 systemd[1]: kubelet.service: Consumed 2min 55.397s CPU time. Mar 13 10:56:56.435865 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 13 10:56:56.576065 master-0 kubenswrapper[33013]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 10:56:56.576065 master-0 kubenswrapper[33013]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 13 10:56:56.576065 master-0 kubenswrapper[33013]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 10:56:56.576065 master-0 kubenswrapper[33013]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 10:56:56.576065 master-0 kubenswrapper[33013]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 13 10:56:56.576065 master-0 kubenswrapper[33013]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 10:56:56.576953 master-0 kubenswrapper[33013]: I0313 10:56:56.576169 33013 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 10:56:56.579123 master-0 kubenswrapper[33013]: W0313 10:56:56.579087 33013 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 10:56:56.579123 master-0 kubenswrapper[33013]: W0313 10:56:56.579111 33013 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 10:56:56.579123 master-0 kubenswrapper[33013]: W0313 10:56:56.579118 33013 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 10:56:56.579123 master-0 kubenswrapper[33013]: W0313 10:56:56.579124 33013 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 10:56:56.579123 master-0 kubenswrapper[33013]: W0313 10:56:56.579129 33013 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 10:56:56.579123 master-0 kubenswrapper[33013]: W0313 10:56:56.579135 33013 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 10:56:56.579430 master-0 
kubenswrapper[33013]: W0313 10:56:56.579142 33013 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579149 33013 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579154 33013 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579159 33013 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579164 33013 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579170 33013 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579175 33013 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579180 33013 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579185 33013 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579191 33013 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579196 33013 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579201 33013 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579206 33013 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579211 33013 
feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579216 33013 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579221 33013 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579226 33013 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579232 33013 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579237 33013 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:56:56.579430 master-0 kubenswrapper[33013]: W0313 10:56:56.579242 33013 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579247 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579251 33013 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579258 33013 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579263 33013 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579268 33013 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579274 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 
10:56:56.579281 33013 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579288 33013 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579294 33013 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579301 33013 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579307 33013 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579313 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579318 33013 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579324 33013 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579332 33013 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579338 33013 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579344 33013 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579349 33013 feature_gate.go:330] unrecognized feature gate: Example Mar 13 10:56:56.580177 master-0 kubenswrapper[33013]: W0313 10:56:56.579354 33013 feature_gate.go:330] unrecognized feature gate: 
ImageStreamImportMode Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579359 33013 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579365 33013 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579370 33013 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579377 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579384 33013 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579389 33013 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579394 33013 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579399 33013 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579404 33013 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579410 33013 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579417 33013 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579422 33013 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579427 33013 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579433 33013 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579439 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579446 33013 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579452 33013 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579458 33013 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579463 33013 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 10:56:56.580931 master-0 kubenswrapper[33013]: W0313 10:56:56.579469 33013 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: W0313 10:56:56.579475 33013 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: W0313 10:56:56.579480 33013 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: W0313 10:56:56.579485 33013 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 
10:56:56.581667 master-0 kubenswrapper[33013]: W0313 10:56:56.579490 33013 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: W0313 10:56:56.579495 33013 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: W0313 10:56:56.579500 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: W0313 10:56:56.579504 33013 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579630 33013 flags.go:64] FLAG: --address="0.0.0.0" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579643 33013 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579653 33013 flags.go:64] FLAG: --anonymous-auth="true" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579661 33013 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579668 33013 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579675 33013 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579683 33013 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579691 33013 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579698 33013 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579704 33013 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579711 33013 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579718 33013 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579724 33013 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579731 33013 flags.go:64] FLAG: --cgroup-root="" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579736 33013 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 13 10:56:56.581667 master-0 kubenswrapper[33013]: I0313 10:56:56.579743 33013 flags.go:64] FLAG: --client-ca-file="" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579749 33013 flags.go:64] FLAG: --cloud-config="" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579755 33013 flags.go:64] FLAG: --cloud-provider="" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579773 33013 flags.go:64] FLAG: --cluster-dns="[]" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579781 33013 flags.go:64] FLAG: --cluster-domain="" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579787 33013 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579793 33013 flags.go:64] FLAG: --config-dir="" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579799 33013 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579808 33013 flags.go:64] FLAG: --container-log-max-files="5" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579816 33013 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 13 
10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579823 33013 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579830 33013 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579837 33013 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579845 33013 flags.go:64] FLAG: --contention-profiling="false" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579851 33013 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579857 33013 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579864 33013 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579870 33013 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579877 33013 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579884 33013 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579890 33013 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579896 33013 flags.go:64] FLAG: --enable-load-reader="false" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579902 33013 flags.go:64] FLAG: --enable-server="true" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579907 33013 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579916 33013 flags.go:64] FLAG: 
--event-burst="100" Mar 13 10:56:56.582674 master-0 kubenswrapper[33013]: I0313 10:56:56.579922 33013 flags.go:64] FLAG: --event-qps="50" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579928 33013 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579934 33013 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579940 33013 flags.go:64] FLAG: --eviction-hard="" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579947 33013 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579953 33013 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579959 33013 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579965 33013 flags.go:64] FLAG: --eviction-soft="" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579971 33013 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579977 33013 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579983 33013 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579989 33013 flags.go:64] FLAG: --experimental-mounter-path="" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.579995 33013 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580001 33013 flags.go:64] FLAG: --fail-swap-on="true" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580006 33013 flags.go:64] FLAG: --feature-gates="" Mar 13 
10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580013 33013 flags.go:64] FLAG: --file-check-frequency="20s" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580020 33013 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580026 33013 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580033 33013 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580039 33013 flags.go:64] FLAG: --healthz-port="10248" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580045 33013 flags.go:64] FLAG: --help="false" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580051 33013 flags.go:64] FLAG: --hostname-override="" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580056 33013 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580062 33013 flags.go:64] FLAG: --http-check-frequency="20s" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580068 33013 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 13 10:56:56.583633 master-0 kubenswrapper[33013]: I0313 10:56:56.580074 33013 flags.go:64] FLAG: --image-credential-provider-config="" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580080 33013 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580085 33013 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580092 33013 flags.go:64] FLAG: --image-service-endpoint="" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580097 33013 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 13 10:56:56.584678 master-0 
kubenswrapper[33013]: I0313 10:56:56.580103 33013 flags.go:64] FLAG: --kube-api-burst="100" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580109 33013 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580115 33013 flags.go:64] FLAG: --kube-api-qps="50" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580121 33013 flags.go:64] FLAG: --kube-reserved="" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580126 33013 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580132 33013 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580138 33013 flags.go:64] FLAG: --kubelet-cgroups="" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580143 33013 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580149 33013 flags.go:64] FLAG: --lock-file="" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580164 33013 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580169 33013 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580176 33013 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580184 33013 flags.go:64] FLAG: --log-json-split-stream="false" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580190 33013 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580197 33013 flags.go:64] FLAG: --log-text-split-stream="false" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 
10:56:56.580202 33013 flags.go:64] FLAG: --logging-format="text" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580208 33013 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580215 33013 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580220 33013 flags.go:64] FLAG: --manifest-url="" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580226 33013 flags.go:64] FLAG: --manifest-url-header="" Mar 13 10:56:56.584678 master-0 kubenswrapper[33013]: I0313 10:56:56.580234 33013 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580240 33013 flags.go:64] FLAG: --max-open-files="1000000" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580247 33013 flags.go:64] FLAG: --max-pods="110" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580252 33013 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580258 33013 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580264 33013 flags.go:64] FLAG: --memory-manager-policy="None" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580270 33013 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580276 33013 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580282 33013 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580288 33013 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580302 33013 flags.go:64] FLAG: --node-status-max-images="50" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580308 33013 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580314 33013 flags.go:64] FLAG: --oom-score-adj="-999" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580320 33013 flags.go:64] FLAG: --pod-cidr="" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580326 33013 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580335 33013 flags.go:64] FLAG: --pod-manifest-path="" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580341 33013 flags.go:64] FLAG: --pod-max-pids="-1" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580346 33013 flags.go:64] FLAG: --pods-per-core="0" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580352 33013 flags.go:64] FLAG: --port="10250" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580358 33013 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580364 33013 flags.go:64] FLAG: --provider-id="" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580370 33013 flags.go:64] FLAG: --qos-reserved="" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580377 33013 flags.go:64] FLAG: --read-only-port="10255" Mar 13 10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580384 33013 flags.go:64] FLAG: --register-node="true" Mar 13 
10:56:56.585971 master-0 kubenswrapper[33013]: I0313 10:56:56.580389 33013 flags.go:64] FLAG: --register-schedulable="true" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580396 33013 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580405 33013 flags.go:64] FLAG: --registry-burst="10" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580411 33013 flags.go:64] FLAG: --registry-qps="5" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580418 33013 flags.go:64] FLAG: --reserved-cpus="" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580423 33013 flags.go:64] FLAG: --reserved-memory="" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580430 33013 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580436 33013 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580443 33013 flags.go:64] FLAG: --rotate-certificates="false" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580448 33013 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580454 33013 flags.go:64] FLAG: --runonce="false" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580460 33013 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580466 33013 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580472 33013 flags.go:64] FLAG: --seccomp-default="false" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580478 33013 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 13 10:56:56.587147 
master-0 kubenswrapper[33013]: I0313 10:56:56.580484 33013 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580490 33013 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580496 33013 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580501 33013 flags.go:64] FLAG: --storage-driver-password="root" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580507 33013 flags.go:64] FLAG: --storage-driver-secure="false" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580513 33013 flags.go:64] FLAG: --storage-driver-table="stats" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580519 33013 flags.go:64] FLAG: --storage-driver-user="root" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580526 33013 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580532 33013 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580538 33013 flags.go:64] FLAG: --system-cgroups="" Mar 13 10:56:56.587147 master-0 kubenswrapper[33013]: I0313 10:56:56.580544 33013 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580553 33013 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580559 33013 flags.go:64] FLAG: --tls-cert-file="" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580565 33013 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580572 33013 flags.go:64] FLAG: --tls-min-version="" Mar 13 10:56:56.588194 
master-0 kubenswrapper[33013]: I0313 10:56:56.580578 33013 flags.go:64] FLAG: --tls-private-key-file="" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580605 33013 flags.go:64] FLAG: --topology-manager-policy="none" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580611 33013 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580617 33013 flags.go:64] FLAG: --topology-manager-scope="container" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580623 33013 flags.go:64] FLAG: --v="2" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580631 33013 flags.go:64] FLAG: --version="false" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580641 33013 flags.go:64] FLAG: --vmodule="" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580648 33013 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: I0313 10:56:56.580654 33013 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580787 33013 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580795 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580801 33013 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580808 33013 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580813 33013 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580818 33013 
feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580824 33013 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580829 33013 feature_gate.go:330] unrecognized feature gate: Example Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580834 33013 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 10:56:56.588194 master-0 kubenswrapper[33013]: W0313 10:56:56.580839 33013 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580844 33013 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580850 33013 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580854 33013 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580859 33013 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580864 33013 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580870 33013 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580876 33013 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580883 33013 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580889 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580902 33013 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580909 33013 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580915 33013 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580921 33013 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580926 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580931 33013 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580938 33013 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580945 33013 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580952 33013 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 10:56:56.589236 master-0 kubenswrapper[33013]: W0313 10:56:56.580958 33013 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.580967 33013 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.580973 33013 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.580979 33013 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.580985 33013 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.580991 33013 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.580996 33013 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581002 33013 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581007 33013 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581013 33013 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581018 33013 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581023 33013 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 10:56:56.590786 
master-0 kubenswrapper[33013]: W0313 10:56:56.581029 33013 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581034 33013 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581039 33013 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581044 33013 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581050 33013 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581055 33013 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581061 33013 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581066 33013 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 10:56:56.590786 master-0 kubenswrapper[33013]: W0313 10:56:56.581071 33013 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581076 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581081 33013 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581088 33013 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581095 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581101 33013 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581107 33013 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581113 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581119 33013 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581124 33013 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581129 33013 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581135 33013 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581141 33013 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581146 33013 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581152 33013 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581158 33013 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581163 33013 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: 
W0313 10:56:56.581168 33013 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581173 33013 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581178 33013 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 10:56:56.591789 master-0 kubenswrapper[33013]: W0313 10:56:56.581183 33013 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.581189 33013 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.581194 33013 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.581199 33013 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: I0313 10:56:56.581210 33013 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: I0313 10:56:56.591539 33013 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: I0313 10:56:56.591563 33013 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 10:56:56.592855 master-0 
kubenswrapper[33013]: W0313 10:56:56.591758 33013 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.591766 33013 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.591771 33013 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.591775 33013 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.591780 33013 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.591787 33013 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.591791 33013 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.591795 33013 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 10:56:56.592855 master-0 kubenswrapper[33013]: W0313 10:56:56.591799 33013 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591803 33013 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591809 33013 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591813 33013 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591818 33013 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591822 33013 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591826 33013 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591830 33013 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591834 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591840 33013 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591844 33013 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591848 33013 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591852 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591855 33013 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591859 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591863 33013 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591866 33013 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591870 33013 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591874 33013 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591878 33013 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 10:56:56.593520 master-0 kubenswrapper[33013]: W0313 10:56:56.591881 33013 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591885 33013 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591893 33013 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591897 33013 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591900 33013 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591904 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591915 33013 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591919 33013 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591964 33013 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591971 33013 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591975 33013 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591978 33013 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591982 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.591987 33013 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.592018 33013 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.592022 33013 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.592027 33013 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.592031 33013 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.592036 33013 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 10:56:56.594961 master-0 kubenswrapper[33013]: W0313 10:56:56.592040 33013 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592044 33013 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592048 33013 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592052 33013 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592056 33013 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592059 33013 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592063 33013 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592070 33013 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592074 33013 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592079 33013 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592084 33013 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592088 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592092 33013 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592096 33013 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592100 33013 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592103 33013 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592107 33013 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592111 33013 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592115 33013 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 10:56:56.595763 master-0 kubenswrapper[33013]: W0313 10:56:56.592121 33013 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592125 33013 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592128 33013 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592132 33013 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592143 33013 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592147 33013 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: I0313 10:56:56.592153 33013 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592430 33013 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592442 33013 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592448 33013 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592452 33013 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592456 33013 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592460 33013 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592467 33013 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 10:56:56.596516 master-0 kubenswrapper[33013]: W0313 10:56:56.592472 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592476 33013 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592480 33013 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592483 33013 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592487 33013 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592491 33013 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592495 33013 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592499 33013 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592503 33013 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592507 33013 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592511 33013 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592517 33013 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592521 33013 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592525 33013 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592528 33013 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592532 33013 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592537 33013 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592541 33013 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592545 33013 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592550 33013 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 10:56:56.597456 master-0 kubenswrapper[33013]: W0313 10:56:56.592554 33013 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592558 33013 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592562 33013 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592568 33013 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592572 33013 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592576 33013 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592611 33013 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592615 33013 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592619 33013 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592623 33013 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592626 33013 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592630 33013 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592634 33013 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592638 33013 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592642 33013 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592646 33013 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592652 33013 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592657 33013 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592660 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592664 33013 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 10:56:56.598628 master-0 kubenswrapper[33013]: W0313 10:56:56.592668 33013 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592672 33013 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592675 33013 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592679 33013 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592685 33013 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592700 33013 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592704 33013 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592708 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592715 33013 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592719 33013 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592723 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592727 33013 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592731 33013 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592735 33013 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592738 33013 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592742 33013 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592746 33013 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592750 33013 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592753 33013 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 10:56:56.599496 master-0 kubenswrapper[33013]: W0313 10:56:56.592757 33013 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: W0313 10:56:56.592762 33013 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: W0313 10:56:56.592769 33013 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: W0313 10:56:56.592778 33013 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: W0313 10:56:56.592783 33013 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: W0313 10:56:56.592787 33013 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: I0313 10:56:56.592793 33013 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: I0313 10:56:56.592978 33013 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: I0313 10:56:56.595008 33013 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: I0313 10:56:56.595078 33013 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: I0313 10:56:56.595344 33013 server.go:997] "Starting client certificate rotation"
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: I0313 10:56:56.595354 33013 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 10:56:56.600447 master-0 kubenswrapper[33013]: I0313 10:56:56.595497 33013 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 10:24:19 +0000 UTC, rotation deadline is 2026-03-14 05:57:09.570965839 +0000 UTC
Mar 13 10:56:56.600998 master-0 kubenswrapper[33013]: I0313 10:56:56.595524 33013 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h0m12.975443315s for next certificate rotation
Mar 13 10:56:56.600998 master-0 kubenswrapper[33013]: I0313 10:56:56.596078 33013 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 10:56:56.600998 master-0 kubenswrapper[33013]: I0313 10:56:56.597786 33013 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 10:56:56.600998 master-0 kubenswrapper[33013]: I0313 10:56:56.600743 33013 log.go:25] "Validated CRI v1 runtime API"
Mar 13 10:56:56.608166 master-0 kubenswrapper[33013]: I0313 10:56:56.607960 33013 log.go:25] "Validated CRI v1 image API"
Mar 13 10:56:56.610607 master-0 kubenswrapper[33013]: I0313 10:56:56.610122 33013 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 10:56:56.619521 master-0 kubenswrapper[33013]: I0313 10:56:56.619451 33013 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 b89da96d-e8b7-46f7-a5b4-754b0b40734d:/dev/vda3]
Mar 13 10:56:56.620548 master-0 kubenswrapper[33013]: I0313 10:56:56.619498 33013 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/04e2d5b4e65ad4d6e19280743e00933d366bcbdfdc3c5d7c64aba41673f1a662/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/04e2d5b4e65ad4d6e19280743e00933d366bcbdfdc3c5d7c64aba41673f1a662/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/05e1407c7b27a4b6e8d757f9a77812ff8adcb8afeba6392964446e6020251829/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/05e1407c7b27a4b6e8d757f9a77812ff8adcb8afeba6392964446e6020251829/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0703b273c4e03bb7b56beaec08bcd6e173eecb63d506d0b227eed01f4963105d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0703b273c4e03bb7b56beaec08bcd6e173eecb63d506d0b227eed01f4963105d/userdata/shm major:0 minor:369 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/09802d7d0a05bccad87d5ddf8cff0a47cdae0568f0f82013285bb0d1dc8f5424/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/09802d7d0a05bccad87d5ddf8cff0a47cdae0568f0f82013285bb0d1dc8f5424/userdata/shm major:0 minor:783 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/11fee05d2806af61a462a41f2f8d14b1a8fc382251199b04114ff9afb908d5a1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/11fee05d2806af61a462a41f2f8d14b1a8fc382251199b04114ff9afb908d5a1/userdata/shm major:0 minor:831 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/13a004f2f44b204dd23b4531ea2ef3d4457cfe84fd8fdc544d2f9015f5747d61/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/13a004f2f44b204dd23b4531ea2ef3d4457cfe84fd8fdc544d2f9015f5747d61/userdata/shm major:0 minor:457 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/164736e7418a21cde804e102fe3d184a2797171e5f4bf83a8bf76c7c9b72cc41/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/164736e7418a21cde804e102fe3d184a2797171e5f4bf83a8bf76c7c9b72cc41/userdata/shm major:0 minor:758 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/186ea687b2b873b969d378ad858dd467c244f19248903ea4dfa2320cfbb636aa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/186ea687b2b873b969d378ad858dd467c244f19248903ea4dfa2320cfbb636aa/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e94f5c752f1fded64a3ee340fb34a998ddce5e3acb0a9a9e83f157fbccc7394/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e94f5c752f1fded64a3ee340fb34a998ddce5e3acb0a9a9e83f157fbccc7394/userdata/shm major:0 minor:153 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1ee97873740b9b10b1888585dd4cf251d4592642ab8be20585d1c34abd206ca4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ee97873740b9b10b1888585dd4cf251d4592642ab8be20585d1c34abd206ca4/userdata/shm major:0 minor:763 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/21ea23db5a94394fed39e6756a1919898e68c50238c79a5641bf3126f4447416/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/21ea23db5a94394fed39e6756a1919898e68c50238c79a5641bf3126f4447416/userdata/shm major:0 minor:342 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/29733a7ea73e3174735d72b2210dd940a71dd3f008e394e00385294d9ba36ee3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/29733a7ea73e3174735d72b2210dd940a71dd3f008e394e00385294d9ba36ee3/userdata/shm major:0 minor:1036 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b19f149420c8c5bdd28117ec0014c144ba254d289aeb742b7f29c424c5d661a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b19f149420c8c5bdd28117ec0014c144ba254d289aeb742b7f29c424c5d661a/userdata/shm major:0 minor:1078 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2baa20e270e178f3e40e4ef86226c93b0ff3020bf6dac2cb5d4f63eecde92557/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2baa20e270e178f3e40e4ef86226c93b0ff3020bf6dac2cb5d4f63eecde92557/userdata/shm major:0 minor:1199 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2d8c2c573acc02ece57d91166be062a427bbc681f8936d54a20df38a4936dc09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2d8c2c573acc02ece57d91166be062a427bbc681f8936d54a20df38a4936dc09/userdata/shm major:0 minor:322 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/324185d8aba3ef3e122592b3ddf0fb321d8d4d7598b9bfc330b8735d340f3d78/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/324185d8aba3ef3e122592b3ddf0fb321d8d4d7598b9bfc330b8735d340f3d78/userdata/shm major:0 minor:1072 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/326eff89f13ef648d073ebec6b104b4118323f6c130c4cf1f4122764a419957e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/326eff89f13ef648d073ebec6b104b4118323f6c130c4cf1f4122764a419957e/userdata/shm major:0 minor:762 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/34c705593dd577219134e52fa5f1f4ac1bf3a254e75ac17359d23f2432c84086/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/34c705593dd577219134e52fa5f1f4ac1bf3a254e75ac17359d23f2432c84086/userdata/shm major:0 minor:341 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3a76099d084f4d1745cd0462dfdd6adb21bbcc918adfa4a88776287e0186cf5c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3a76099d084f4d1745cd0462dfdd6adb21bbcc918adfa4a88776287e0186cf5c/userdata/shm major:0 minor:491 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3a7d2e60ddc43ee697baaf390993508cae16887b2a5d4cb1ef47d6c884025855/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3a7d2e60ddc43ee697baaf390993508cae16887b2a5d4cb1ef47d6c884025855/userdata/shm major:0 minor:784 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3ae050b82d12b25aa641b68f4f3b48f026796f7cf0455a63a6e8d7c183a407db/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3ae050b82d12b25aa641b68f4f3b48f026796f7cf0455a63a6e8d7c183a407db/userdata/shm major:0 minor:806 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3b9f539be02f519c82f90f79644538b0615d221de57b1fd6c7c4726d8ebe602e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3b9f539be02f519c82f90f79644538b0615d221de57b1fd6c7c4726d8ebe602e/userdata/shm major:0 minor:89 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3ccf2b15838415a4be52f403df345301db18d66d37f6fa09df717882bb3b0fda/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3ccf2b15838415a4be52f403df345301db18d66d37f6fa09df717882bb3b0fda/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3d5f1f4095f01f19dc1c943afe4e4b0c9a80883c821d2a0dcacc2ad4ee4f8b25/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3d5f1f4095f01f19dc1c943afe4e4b0c9a80883c821d2a0dcacc2ad4ee4f8b25/userdata/shm major:0 minor:1004 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/40709622bc83dd44130ec2874b3fecd53ec9c74c9ec5ea39d2f7a0dcddaf6a5c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/40709622bc83dd44130ec2874b3fecd53ec9c74c9ec5ea39d2f7a0dcddaf6a5c/userdata/shm major:0 minor:375 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/40bc8729edbc545950cfd4248f291a2938cf20232c66e767905dda5ad583859c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/40bc8729edbc545950cfd4248f291a2938cf20232c66e767905dda5ad583859c/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/427d8baf8b36c464fef89b4b9363187b5106a9ee18a5220827b5f1bf40b93c0d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/427d8baf8b36c464fef89b4b9363187b5106a9ee18a5220827b5f1bf40b93c0d/userdata/shm major:0 minor:330 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48e8b214299b9f4db7879f744943297a290919968a8d4c7d50b6a78a9ada043a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48e8b214299b9f4db7879f744943297a290919968a8d4c7d50b6a78a9ada043a/userdata/shm major:0 minor:488 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4d051c8ad32b7669f426e6d80e6632cee3e398cb08f827d5c2ff51c92ed352a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4d051c8ad32b7669f426e6d80e6632cee3e398cb08f827d5c2ff51c92ed352a3/userdata/shm major:0 minor:886 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/50614fe1bae99eef2fccbbf06f52ab65208692c910cfe5fe3711fe68d7b32786/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/50614fe1bae99eef2fccbbf06f52ab65208692c910cfe5fe3711fe68d7b32786/userdata/shm major:0 minor:460 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/52b02d0e4a7c2c479465f8242f01da199717910ea7b898fbcda40528a83b169e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/52b02d0e4a7c2c479465f8242f01da199717910ea7b898fbcda40528a83b169e/userdata/shm major:0 minor:786 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/571b031f77274fed6328f6e07d585dcfd8fd69050ed40ecc9a578fd8f3044381/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/571b031f77274fed6328f6e07d585dcfd8fd69050ed40ecc9a578fd8f3044381/userdata/shm major:0 minor:326 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/5e13cffe1976b1fe526e31bded64fe9c448e434d19da41cefd16fb763080f8bc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5e13cffe1976b1fe526e31bded64fe9c448e434d19da41cefd16fb763080f8bc/userdata/shm major:0 minor:791 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60b98cd864b31c5d3ce33f7e617eaf280215ab256f27e08a3aa813b955cd4550/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60b98cd864b31c5d3ce33f7e617eaf280215ab256f27e08a3aa813b955cd4550/userdata/shm major:0 minor:779 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64e301f64932b9e42866a17f98ce668f6dac597e77b8c15551a291086a0c377b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64e301f64932b9e42866a17f98ce668f6dac597e77b8c15551a291086a0c377b/userdata/shm major:0 minor:490 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6724c795aeefb2de7ccb8edf6dd545a4648253bccf79de04ddb0f389fe53a8e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6724c795aeefb2de7ccb8edf6dd545a4648253bccf79de04ddb0f389fe53a8e7/userdata/shm major:0 minor:1074 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6ae68534d60ba95b9a0cc4c1bb4a76e1716cb7e67493f9e7d66360f5bc7a13b3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6ae68534d60ba95b9a0cc4c1bb4a76e1716cb7e67493f9e7d66360f5bc7a13b3/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b0b21ce8c91e31c5d3fafde2dc1d7d9feb5cca70a9bf65bb781c974d266575e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b0b21ce8c91e31c5d3fafde2dc1d7d9feb5cca70a9bf65bb781c974d266575e/userdata/shm major:0 minor:328 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/72d184f62fa595f3a7463191ce616e1db275cdc732a1ab006b74065651d152d4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/72d184f62fa595f3a7463191ce616e1db275cdc732a1ab006b74065651d152d4/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/76246e9a1d2379cb0958975bb664cf21b612b44d022ee860fbd36d45bdea98e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/76246e9a1d2379cb0958975bb664cf21b612b44d022ee860fbd36d45bdea98e3/userdata/shm major:0 minor:1006 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7cf1f1393ed4dc75d53053e58fde65a2d67118e8d37c0361a92ae7802d8b760d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7cf1f1393ed4dc75d53053e58fde65a2d67118e8d37c0361a92ae7802d8b760d/userdata/shm major:0 minor:701 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/823dd75fa90067312411e552ed573617320e3f633eba91399bcfb19342dfaab8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/823dd75fa90067312411e552ed573617320e3f633eba91399bcfb19342dfaab8/userdata/shm major:0 minor:1130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/824d2e06fb02d75eff387f4090fa04e983a89eabed59a10155690e2b0750ea37/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/824d2e06fb02d75eff387f4090fa04e983a89eabed59a10155690e2b0750ea37/userdata/shm major:0 minor:869 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8b123d8cf30f9a2e585e105a6d1e6a093488b477d996f0893a6a50a5c5b92b38/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8b123d8cf30f9a2e585e105a6d1e6a093488b477d996f0893a6a50a5c5b92b38/userdata/shm major:0 minor:487 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8b939255ebac1f66f189aaed584b6e7c61496fc54de0eca1dee70e7efa443532/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8b939255ebac1f66f189aaed584b6e7c61496fc54de0eca1dee70e7efa443532/userdata/shm major:0 minor:726 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f00c30651131dc152cafde2dc4f58ee3081e7ee0af524ba7783523529e49fba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f00c30651131dc152cafde2dc4f58ee3081e7ee0af524ba7783523529e49fba/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/97a747ef867987de8a139981f17a1b239fcb5c28199b67ab78094a7f8154dc7c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/97a747ef867987de8a139981f17a1b239fcb5c28199b67ab78094a7f8154dc7c/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9a02e284386b73dcacdc66689703a6ce2a89d3ae22d94162ffdd3488c53d3335/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9a02e284386b73dcacdc66689703a6ce2a89d3ae22d94162ffdd3488c53d3335/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9c60be2c4f1a0c68603912b47e268ac1ef0712a8ed512ee20014ad96ccd12d01/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c60be2c4f1a0c68603912b47e268ac1ef0712a8ed512ee20014ad96ccd12d01/userdata/shm major:0 minor:529 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9d325882d9051ed4dcae015a4e37aab1ae44cf25e837fbca8dbbfcfc4d9934e5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9d325882d9051ed4dcae015a4e37aab1ae44cf25e837fbca8dbbfcfc4d9934e5/userdata/shm major:0 minor:458 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a283bd1cab37da2c35528d1fc1a0a03b24555657ec54c53a6d0fcce5a530df6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a283bd1cab37da2c35528d1fc1a0a03b24555657ec54c53a6d0fcce5a530df6a/userdata/shm major:0 minor:792 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a63b48f34aee10c2a5c7b02c8aaeb3e69aba820b8bbd971ae28a10945e9803c8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a63b48f34aee10c2a5c7b02c8aaeb3e69aba820b8bbd971ae28a10945e9803c8/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3/userdata/shm major:0 minor:303 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b307d791edfe64a9b684cb84f780359a81088e7f734461ffa9d77ba51707349a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b307d791edfe64a9b684cb84f780359a81088e7f734461ffa9d77ba51707349a/userdata/shm major:0 minor:800 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c3ef257d3865e4ef11b927a21a93e51aafc4c9ebd98baa7d651806b2a01e30df/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c3ef257d3865e4ef11b927a21a93e51aafc4c9ebd98baa7d651806b2a01e30df/userdata/shm major:0 minor:236 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c797020833454d5ed2c33acc860a0f30fce513778328e3b025208a981e1fff3f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c797020833454d5ed2c33acc860a0f30fce513778328e3b025208a981e1fff3f/userdata/shm major:0 minor:74 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c7d9738c3adc0c979eef42141f9dc2b629b15190348d5c5364a237fdd93a9dff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c7d9738c3adc0c979eef42141f9dc2b629b15190348d5c5364a237fdd93a9dff/userdata/shm major:0 minor:93 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c7ee7c0157ea1a3aa7abb463c26a21b9f9c80a7c51726be3dad9c08112783426/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c7ee7c0157ea1a3aa7abb463c26a21b9f9c80a7c51726be3dad9c08112783426/userdata/shm major:0 minor:290 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c9ad49e483c47605d1fda52e7e670f8d0dd2ee6b7b1f41e2ebd66c5396af192f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9ad49e483c47605d1fda52e7e670f8d0dd2ee6b7b1f41e2ebd66c5396af192f/userdata/shm major:0 minor:738 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cc41129be016cbf901b0a2cc5f025302b5359f5df446ef56a8371863c53e45e5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cc41129be016cbf901b0a2cc5f025302b5359f5df446ef56a8371863c53e45e5/userdata/shm major:0 minor:650 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/cf9561f8a446435dd3e05b7973785f1768a9224b0e43a36e35e60c9ec1bc16a2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf9561f8a446435dd3e05b7973785f1768a9224b0e43a36e35e60c9ec1bc16a2/userdata/shm major:0 minor:340 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cfcbc3062b54d8acdba8fb18315546ff9d2740da776054d7f9430b71a5238353/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfcbc3062b54d8acdba8fb18315546ff9d2740da776054d7f9430b71a5238353/userdata/shm major:0 minor:251 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d019d509921c4d166cd7651a1a35a29172d4e8a0b6f47b7d8c8b1a18d02dbf3c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d019d509921c4d166cd7651a1a35a29172d4e8a0b6f47b7d8c8b1a18d02dbf3c/userdata/shm major:0 minor:1167 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d32de03a3a8ddd97f3e65197fb212f2ef727ae3a334417e63b4520866c016ec6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d32de03a3a8ddd97f3e65197fb212f2ef727ae3a334417e63b4520866c016ec6/userdata/shm major:0 minor:280 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d4855d0948cb05692ea183985279dcd65ea773074cc3b3ff4694481fd014efe8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d4855d0948cb05692ea183985279dcd65ea773074cc3b3ff4694481fd014efe8/userdata/shm major:0 minor:428 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d56ffe6fa9b01bb963c33e630f78eeefc536f0ea18493c909ad582b0bbe668a2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d56ffe6fa9b01bb963c33e630f78eeefc536f0ea18493c909ad582b0bbe668a2/userdata/shm major:0 minor:724 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d5d5f29010412c6336405d3c5516283cb7d7f5b2df47504d4448651a9a52ed98/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d5d5f29010412c6336405d3c5516283cb7d7f5b2df47504d4448651a9a52ed98/userdata/shm major:0 minor:329 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6da84d85f2972436dd4f3787492391fdf9d5e5e6bdc8d3e5f13761666cdfd3b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6da84d85f2972436dd4f3787492391fdf9d5e5e6bdc8d3e5f13761666cdfd3b/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d9ff345f3e6004990e637fa6bd4c1c17fad38322042b096639037cf7570053ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d9ff345f3e6004990e637fa6bd4c1c17fad38322042b096639037cf7570053ac/userdata/shm major:0 minor:722 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd93ec4fe47e71fd21c0051085976706d225fa5cba2fcde1e22ce417bdc6d6e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd93ec4fe47e71fd21c0051085976706d225fa5cba2fcde1e22ce417bdc6d6e7/userdata/shm major:0 minor:593 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e1e2b9079d6118c595611f9ff8bcc0950650ff2a128d6d9e608f418bd87daef1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e1e2b9079d6118c595611f9ff8bcc0950650ff2a128d6d9e608f418bd87daef1/userdata/shm major:0 minor:810 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f4e5ae9525fe60361341f421bac6c1976c0d8c217394b9ae9ea8bc8043db8345/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f4e5ae9525fe60361341f421bac6c1976c0d8c217394b9ae9ea8bc8043db8345/userdata/shm major:0 minor:1010 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fcd78f90ad99c247dece0b85a206d1ac457560cacc8ddad5d00adc32257026d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fcd78f90ad99c247dece0b85a206d1ac457560cacc8ddad5d00adc32257026d1/userdata/shm major:0 minor:750 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/05a72a4c-5ce8-49d1-8e4f-334f63d4e987/volumes/kubernetes.io~projected/kube-api-access-btws6:{mountpoint:/var/lib/kubelet/pods/05a72a4c-5ce8-49d1-8e4f-334f63d4e987/volumes/kubernetes.io~projected/kube-api-access-btws6 major:0 minor:563 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/05a72a4c-5ce8-49d1-8e4f-334f63d4e987/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/05a72a4c-5ce8-49d1-8e4f-334f63d4e987/volumes/kubernetes.io~secret/cert major:0 minor:559 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/070b85a0-f076-4750-aa00-dabba401dc75/volumes/kubernetes.io~projected/kube-api-access-nlmhn:{mountpoint:/var/lib/kubelet/pods/070b85a0-f076-4750-aa00-dabba401dc75/volumes/kubernetes.io~projected/kube-api-access-nlmhn major:0 minor:760 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/070b85a0-f076-4750-aa00-dabba401dc75/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/070b85a0-f076-4750-aa00-dabba401dc75/volumes/kubernetes.io~secret/cert major:0 minor:755 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/070b85a0-f076-4750-aa00-dabba401dc75/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/070b85a0-f076-4750-aa00-dabba401dc75/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:756 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/0ac1a605-d2d5-4004-96f5-121c20555bde/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0ac1a605-d2d5-4004-96f5-121c20555bde/volumes/kubernetes.io~projected/kube-api-access major:0 minor:730 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ac1a605-d2d5-4004-96f5-121c20555bde/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0ac1a605-d2d5-4004-96f5-121c20555bde/volumes/kubernetes.io~secret/serving-cert major:0 minor:721 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/11927952-723f-4d6d-922b-73139abe8877/volumes/kubernetes.io~projected/kube-api-access-kgb25:{mountpoint:/var/lib/kubelet/pods/11927952-723f-4d6d-922b-73139abe8877/volumes/kubernetes.io~projected/kube-api-access-kgb25 major:0 minor:571 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/11927952-723f-4d6d-922b-73139abe8877/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/11927952-723f-4d6d-922b-73139abe8877/volumes/kubernetes.io~secret/metrics-tls major:0 minor:573 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~projected/kube-api-access-qqjkf:{mountpoint:/var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~projected/kube-api-access-qqjkf major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14f6e3b2-716c-4392-b3c8-75b2168ccfb7/volumes/kubernetes.io~projected/kube-api-access-gh6kl:{mountpoint:/var/lib/kubelet/pods/14f6e3b2-716c-4392-b3c8-75b2168ccfb7/volumes/kubernetes.io~projected/kube-api-access-gh6kl major:0 minor:579 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/14f6e3b2-716c-4392-b3c8-75b2168ccfb7/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/14f6e3b2-716c-4392-b3c8-75b2168ccfb7/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~projected/kube-api-access-cg69z:{mountpoint:/var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~projected/kube-api-access-cg69z major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~projected/kube-api-access-8rfpp:{mountpoint:/var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~projected/kube-api-access-8rfpp major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~projected/kube-api-access-h9cbm:{mountpoint:/var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~projected/kube-api-access-h9cbm major:0 minor:479 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~secret/encryption-config major:0 minor:478 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~secret/etcd-client major:0 minor:476 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~secret/serving-cert major:0 minor:477 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1edde4bf-4554-4ab2-b588-513ad84a9bae/volumes/kubernetes.io~projected/kube-api-access-kxkl8:{mountpoint:/var/lib/kubelet/pods/1edde4bf-4554-4ab2-b588-513ad84a9bae/volumes/kubernetes.io~projected/kube-api-access-kxkl8 major:0 minor:802 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1edde4bf-4554-4ab2-b588-513ad84a9bae/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/1edde4bf-4554-4ab2-b588-513ad84a9bae/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:798 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1edde4bf-4554-4ab2-b588-513ad84a9bae/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/1edde4bf-4554-4ab2-b588-513ad84a9bae/volumes/kubernetes.io~secret/webhook-cert major:0 minor:801 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2195f7be-b41e-4ae2-b737-d5782e0d41a8/volumes/kubernetes.io~projected/kube-api-access-r657p:{mountpoint:/var/lib/kubelet/pods/2195f7be-b41e-4ae2-b737-d5782e0d41a8/volumes/kubernetes.io~projected/kube-api-access-r657p major:0 minor:1001 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/257a4a8b-014c-4473-80a0-e95cf6d41bf1/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/257a4a8b-014c-4473-80a0-e95cf6d41bf1/volumes/kubernetes.io~projected/ca-certs major:0 minor:480 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/257a4a8b-014c-4473-80a0-e95cf6d41bf1/volumes/kubernetes.io~projected/kube-api-access-hzv5v:{mountpoint:/var/lib/kubelet/pods/257a4a8b-014c-4473-80a0-e95cf6d41bf1/volumes/kubernetes.io~projected/kube-api-access-hzv5v major:0 minor:481 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/257a4a8b-014c-4473-80a0-e95cf6d41bf1/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/257a4a8b-014c-4473-80a0-e95cf6d41bf1/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:526 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6/volumes/kubernetes.io~projected/kube-api-access-m7v6s:{mountpoint:/var/lib/kubelet/pods/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6/volumes/kubernetes.io~projected/kube-api-access-m7v6s major:0 minor:747 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6/volumes/kubernetes.io~secret/proxy-tls major:0 minor:746 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~projected/kube-api-access-lpdlr:{mountpoint:/var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~projected/kube-api-access-lpdlr major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a05e72d-836f-40e0-8a5c-ee02dce494b3/volumes/kubernetes.io~projected/kube-api-access-qdb2x:{mountpoint:/var/lib/kubelet/pods/2a05e72d-836f-40e0-8a5c-ee02dce494b3/volumes/kubernetes.io~projected/kube-api-access-qdb2x major:0 minor:633 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2afe3890-e844-4dd3-ba49-3ac9178549bf/volumes/kubernetes.io~projected/kube-api-access-d84xk:{mountpoint:/var/lib/kubelet/pods/2afe3890-e844-4dd3-ba49-3ac9178549bf/volumes/kubernetes.io~projected/kube-api-access-d84xk major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2afe3890-e844-4dd3-ba49-3ac9178549bf/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/2afe3890-e844-4dd3-ba49-3ac9178549bf/volumes/kubernetes.io~secret/srv-cert major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8/volumes/kubernetes.io~projected/kube-api-access-qdg6f:{mountpoint:/var/lib/kubelet/pods/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8/volumes/kubernetes.io~projected/kube-api-access-qdg6f major:0 minor:1071 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1068 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1069 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~projected/kube-api-access-hp2qn:{mountpoint:/var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~projected/kube-api-access-hp2qn major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3ff2ab1c-7057-4e18-8e32-68807f86532a/volumes/kubernetes.io~projected/kube-api-access-8c4rc:{mountpoint:/var/lib/kubelet/pods/3ff2ab1c-7057-4e18-8e32-68807f86532a/volumes/kubernetes.io~projected/kube-api-access-8c4rc major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ff2ab1c-7057-4e18-8e32-68807f86532a/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/3ff2ab1c-7057-4e18-8e32-68807f86532a/volumes/kubernetes.io~secret/metrics-tls major:0 minor:454 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~projected/kube-api-access-tdgld:{mountpoint:/var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~projected/kube-api-access-tdgld major:0 minor:475 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~secret/encryption-config major:0 minor:473 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~secret/etcd-client major:0 minor:474 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~secret/serving-cert major:0 minor:486 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~projected/kube-api-access-kjcjm:{mountpoint:/var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~projected/kube-api-access-kjcjm major:0 minor:228 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:455 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:448 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/484e6d0b-d057-4658-8e49-bbe7e6f6ee86/volumes/kubernetes.io~projected/kube-api-access-qbdwm:{mountpoint:/var/lib/kubelet/pods/484e6d0b-d057-4658-8e49-bbe7e6f6ee86/volumes/kubernetes.io~projected/kube-api-access-qbdwm major:0 minor:735 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/484e6d0b-d057-4658-8e49-bbe7e6f6ee86/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/484e6d0b-d057-4658-8e49-bbe7e6f6ee86/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:734 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/48f99840-4d9e-49c5-819e-0bb15493feb5/volumes/kubernetes.io~projected/kube-api-access-mb5l4:{mountpoint:/var/lib/kubelet/pods/48f99840-4d9e-49c5-819e-0bb15493feb5/volumes/kubernetes.io~projected/kube-api-access-mb5l4 major:0 minor:829 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/48f99840-4d9e-49c5-819e-0bb15493feb5/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/48f99840-4d9e-49c5-819e-0bb15493feb5/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:828 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16/volumes/kubernetes.io~projected/kube-api-access-rvvhh:{mountpoint:/var/lib/kubelet/pods/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16/volumes/kubernetes.io~projected/kube-api-access-rvvhh major:0 minor:720 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4d5479f3-51ec-4b93-8188-21cdda44828d/volumes/kubernetes.io~projected/kube-api-access-j6xlb:{mountpoint:/var/lib/kubelet/pods/4d5479f3-51ec-4b93-8188-21cdda44828d/volumes/kubernetes.io~projected/kube-api-access-j6xlb major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4d5479f3-51ec-4b93-8188-21cdda44828d/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/4d5479f3-51ec-4b93-8188-21cdda44828d/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:437 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4df756f0-c6b6-4730-842a-7ee9227397ae/volumes/kubernetes.io~projected/kube-api-access-8k4c5:{mountpoint:/var/lib/kubelet/pods/4df756f0-c6b6-4730-842a-7ee9227397ae/volumes/kubernetes.io~projected/kube-api-access-8k4c5 major:0 minor:1035 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4df756f0-c6b6-4730-842a-7ee9227397ae/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/4df756f0-c6b6-4730-842a-7ee9227397ae/volumes/kubernetes.io~secret/certs major:0 minor:1027 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4df756f0-c6b6-4730-842a-7ee9227397ae/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/4df756f0-c6b6-4730-842a-7ee9227397ae/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1026 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e6ecc16-19cb-4b66-801f-b958b10d0ce7/volumes/kubernetes.io~projected/kube-api-access-gn8w5:{mountpoint:/var/lib/kubelet/pods/4e6ecc16-19cb-4b66-801f-b958b10d0ce7/volumes/kubernetes.io~projected/kube-api-access-gn8w5 major:0 minor:749 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e6ecc16-19cb-4b66-801f-b958b10d0ce7/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/4e6ecc16-19cb-4b66-801f-b958b10d0ce7/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:748 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5448b59a-b731-45a3-9ded-d25315f597fb/volumes/kubernetes.io~projected/kube-api-access-d8kvd:{mountpoint:/var/lib/kubelet/pods/5448b59a-b731-45a3-9ded-d25315f597fb/volumes/kubernetes.io~projected/kube-api-access-d8kvd major:0 minor:1065 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5448b59a-b731-45a3-9ded-d25315f597fb/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/5448b59a-b731-45a3-9ded-d25315f597fb/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1060 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5448b59a-b731-45a3-9ded-d25315f597fb/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/5448b59a-b731-45a3-9ded-d25315f597fb/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1064 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/549bd192-0235-4994-b485-f1b10d16f6b5/volumes/kubernetes.io~projected/kube-api-access-pwqp6:{mountpoint:/var/lib/kubelet/pods/549bd192-0235-4994-b485-f1b10d16f6b5/volumes/kubernetes.io~projected/kube-api-access-pwqp6 major:0 minor:426 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/549bd192-0235-4994-b485-f1b10d16f6b5/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/549bd192-0235-4994-b485-f1b10d16f6b5/volumes/kubernetes.io~secret/signing-key major:0 minor:425 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~projected/kube-api-access-grplv:{mountpoint:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~projected/kube-api-access-grplv major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/etcd-client major:0 minor:209 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5843b0d4-a538-4261-b425-598e318c9d07/volumes/kubernetes.io~projected/kube-api-access-r6nnz:{mountpoint:/var/lib/kubelet/pods/5843b0d4-a538-4261-b425-598e318c9d07/volumes/kubernetes.io~projected/kube-api-access-r6nnz major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5aa507cf-017d-44f5-8662-77547f82fb51/volumes/kubernetes.io~projected/kube-api-access-jt6sd:{mountpoint:/var/lib/kubelet/pods/5aa507cf-017d-44f5-8662-77547f82fb51/volumes/kubernetes.io~projected/kube-api-access-jt6sd major:0 minor:719 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b796628-a6ca-4d5c-9870-0ca60b9372aa/volumes/kubernetes.io~projected/kube-api-access-48nns:{mountpoint:/var/lib/kubelet/pods/5b796628-a6ca-4d5c-9870-0ca60b9372aa/volumes/kubernetes.io~projected/kube-api-access-48nns major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b796628-a6ca-4d5c-9870-0ca60b9372aa/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/5b796628-a6ca-4d5c-9870-0ca60b9372aa/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1067 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b796628-a6ca-4d5c-9870-0ca60b9372aa/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/5b796628-a6ca-4d5c-9870-0ca60b9372aa/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1066 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~projected/kube-api-access-22bwx:{mountpoint:/var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~projected/kube-api-access-22bwx major:0 minor:234 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~secret/serving-cert major:0 minor:302 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/60e17cd1-c520-4d8d-8c72-47bf73b8cc66/volumes/kubernetes.io~projected/kube-api-access-xg9zz:{mountpoint:/var/lib/kubelet/pods/60e17cd1-c520-4d8d-8c72-47bf73b8cc66/volumes/kubernetes.io~projected/kube-api-access-xg9zz major:0 minor:868 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/60e17cd1-c520-4d8d-8c72-47bf73b8cc66/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/60e17cd1-c520-4d8d-8c72-47bf73b8cc66/volumes/kubernetes.io~secret/proxy-tls major:0 minor:863 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6622be09-206e-4d02-90ca-6d9f2fc852aa/volumes/kubernetes.io~projected/kube-api-access-lqg6g:{mountpoint:/var/lib/kubelet/pods/6622be09-206e-4d02-90ca-6d9f2fc852aa/volumes/kubernetes.io~projected/kube-api-access-lqg6g major:0 minor:374 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/66f49a19-0e3b-4611-b8a6-5f5687fa20b6/volumes/kubernetes.io~projected/kube-api-access-knkb7:{mountpoint:/var/lib/kubelet/pods/66f49a19-0e3b-4611-b8a6-5f5687fa20b6/volumes/kubernetes.io~projected/kube-api-access-knkb7 major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/66f49a19-0e3b-4611-b8a6-5f5687fa20b6/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/66f49a19-0e3b-4611-b8a6-5f5687fa20b6/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:445 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~projected/kube-api-access-rjk5l:{mountpoint:/var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~projected/kube-api-access-rjk5l major:0 minor:233 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/kube-api-access-qd2mn:{mountpoint:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/kube-api-access-qd2mn major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~secret/metrics-tls major:0 minor:442 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/79bb87a4-8834-4c73-834e-356ccc1f7f9b/volumes/kubernetes.io~projected/kube-api-access-56qz6:{mountpoint:/var/lib/kubelet/pods/79bb87a4-8834-4c73-834e-356ccc1f7f9b/volumes/kubernetes.io~projected/kube-api-access-56qz6 major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/79bb87a4-8834-4c73-834e-356ccc1f7f9b/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/79bb87a4-8834-4c73-834e-356ccc1f7f9b/volumes/kubernetes.io~secret/metrics-certs major:0 minor:447 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/803de28e-3b31-4ea2-9b97-87a733635a5c/volumes/kubernetes.io~projected/kube-api-access-gchrx:{mountpoint:/var/lib/kubelet/pods/803de28e-3b31-4ea2-9b97-87a733635a5c/volumes/kubernetes.io~projected/kube-api-access-gchrx major:0 minor:307 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/866cf034-8fd8-4f16-8e9b-68627228aa8d/volumes/kubernetes.io~projected/kube-api-access-mnrlx:{mountpoint:/var/lib/kubelet/pods/866cf034-8fd8-4f16-8e9b-68627228aa8d/volumes/kubernetes.io~projected/kube-api-access-mnrlx major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86774fd7-7c26-4b41-badb-de1004397637/volumes/kubernetes.io~projected/kube-api-access-tfxm5:{mountpoint:/var/lib/kubelet/pods/86774fd7-7c26-4b41-badb-de1004397637/volumes/kubernetes.io~projected/kube-api-access-tfxm5 major:0 minor:757 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86774fd7-7c26-4b41-badb-de1004397637/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/86774fd7-7c26-4b41-badb-de1004397637/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:754 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~projected/kube-api-access major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a305f45-8689-45a8-8c8b-5954f2c863df/volumes/kubernetes.io~projected/kube-api-access-zp6pp:{mountpoint:/var/lib/kubelet/pods/8a305f45-8689-45a8-8c8b-5954f2c863df/volumes/kubernetes.io~projected/kube-api-access-zp6pp major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a305f45-8689-45a8-8c8b-5954f2c863df/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/8a305f45-8689-45a8-8c8b-5954f2c863df/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:446 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/kube-api-access-m5vcv:{mountpoint:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/kube-api-access-m5vcv major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:453 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~projected/kube-api-access major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~projected/kube-api-access-t2q2f:{mountpoint:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~projected/kube-api-access-t2q2f major:0 minor:696 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:686 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:608 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:63 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:594 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9aa4b44d-f202-4670-afab-44b38960026f/volumes/kubernetes.io~projected/kube-api-access-bjvtr:{mountpoint:/var/lib/kubelet/pods/9aa4b44d-f202-4670-afab-44b38960026f/volumes/kubernetes.io~projected/kube-api-access-bjvtr major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d8af021-f20f-48a2-8b2a-3a5a3f37237f/volumes/kubernetes.io~projected/kube-api-access-dzxzs:{mountpoint:/var/lib/kubelet/pods/9d8af021-f20f-48a2-8b2a-3a5a3f37237f/volumes/kubernetes.io~projected/kube-api-access-dzxzs major:0 minor:782 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d8af021-f20f-48a2-8b2a-3a5a3f37237f/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/9d8af021-f20f-48a2-8b2a-3a5a3f37237f/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:772 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d8af021-f20f-48a2-8b2a-3a5a3f37237f/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/9d8af021-f20f-48a2-8b2a-3a5a3f37237f/volumes/kubernetes.io~secret/prometheus-operator-tls 
major:0 minor:773 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9da11462-a91d-4d02-8614-78b4c5b2f7e2/volumes/kubernetes.io~projected/kube-api-access-hp847:{mountpoint:/var/lib/kubelet/pods/9da11462-a91d-4d02-8614-78b4c5b2f7e2/volumes/kubernetes.io~projected/kube-api-access-hp847 major:0 minor:780 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9da11462-a91d-4d02-8614-78b4c5b2f7e2/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/9da11462-a91d-4d02-8614-78b4c5b2f7e2/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:776 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~projected/kube-api-access-p29zg:{mountpoint:/var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~projected/kube-api-access-p29zg major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc/volumes/kubernetes.io~projected/kube-api-access-f5656:{mountpoint:/var/lib/kubelet/pods/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc/volumes/kubernetes.io~projected/kube-api-access-f5656 major:0 minor:586 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc/volumes/kubernetes.io~secret/serving-cert major:0 minor:580 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1003 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b10584c2-ef04-4649-bcb6-9222c9530c3f/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/b10584c2-ef04-4649-bcb6-9222c9530c3f/volumes/kubernetes.io~projected/ca-certs major:0 minor:484 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b10584c2-ef04-4649-bcb6-9222c9530c3f/volumes/kubernetes.io~projected/kube-api-access-zsswm:{mountpoint:/var/lib/kubelet/pods/b10584c2-ef04-4649-bcb6-9222c9530c3f/volumes/kubernetes.io~projected/kube-api-access-zsswm major:0 minor:482 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b12e76f4-b960-4534-90e6-a2cdbecd1728/volumes/kubernetes.io~projected/kube-api-access-xq9dl:{mountpoint:/var/lib/kubelet/pods/b12e76f4-b960-4534-90e6-a2cdbecd1728/volumes/kubernetes.io~projected/kube-api-access-xq9dl major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~projected/kube-api-access-q5hq9:{mountpoint:/var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~projected/kube-api-access-q5hq9 major:0 minor:1129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1127 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~projected/kube-api-access-l5rht:{mountpoint:/var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~projected/kube-api-access-l5rht major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~projected/kube-api-access-vxvqn:{mountpoint:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~projected/kube-api-access-vxvqn major:0 minor:152 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/beee81ef-5a3a-4df2-85d5-2573679d261f/volumes/kubernetes.io~projected/kube-api-access-f8q5s:{mountpoint:/var/lib/kubelet/pods/beee81ef-5a3a-4df2-85d5-2573679d261f/volumes/kubernetes.io~projected/kube-api-access-f8q5s major:0 minor:731 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/volumes/kubernetes.io~projected/kube-api-access-j25nl:{mountpoint:/var/lib/kubelet/pods/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/volumes/kubernetes.io~projected/kube-api-access-j25nl major:0 minor:885 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:816 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c09f42db-e6d7-469d-9761-88a879f6aa6b/volumes/kubernetes.io~projected/kube-api-access-mcb99:{mountpoint:/var/lib/kubelet/pods/c09f42db-e6d7-469d-9761-88a879f6aa6b/volumes/kubernetes.io~projected/kube-api-access-mcb99 major:0 minor:585 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c09f42db-e6d7-469d-9761-88a879f6aa6b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c09f42db-e6d7-469d-9761-88a879f6aa6b/volumes/kubernetes.io~secret/serving-cert major:0 minor:1172 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c455a959-d764-4b4f-a1e0-95c27495dd9d/volumes/kubernetes.io~projected/kube-api-access-2cpdn:{mountpoint:/var/lib/kubelet/pods/c455a959-d764-4b4f-a1e0-95c27495dd9d/volumes/kubernetes.io~projected/kube-api-access-2cpdn major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c455a959-d764-4b4f-a1e0-95c27495dd9d/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/c455a959-d764-4b4f-a1e0-95c27495dd9d/volumes/kubernetes.io~secret/srv-cert major:0 minor:443 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c87545aa-11c2-4e6e-8c13-16eeff3be83b/volumes/kubernetes.io~projected/kube-api-access-pwfzq:{mountpoint:/var/lib/kubelet/pods/c87545aa-11c2-4e6e-8c13-16eeff3be83b/volumes/kubernetes.io~projected/kube-api-access-pwfzq major:0 minor:778 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c87545aa-11c2-4e6e-8c13-16eeff3be83b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c87545aa-11c2-4e6e-8c13-16eeff3be83b/volumes/kubernetes.io~secret/serving-cert major:0 minor:775 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d0f42a72-24c7-49e6-8edb-97b2b0d6183a/volumes/kubernetes.io~projected/kube-api-access-26dtr:{mountpoint:/var/lib/kubelet/pods/d0f42a72-24c7-49e6-8edb-97b2b0d6183a/volumes/kubernetes.io~projected/kube-api-access-26dtr major:0 minor:820 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0f42a72-24c7-49e6-8edb-97b2b0d6183a/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/d0f42a72-24c7-49e6-8edb-97b2b0d6183a/volumes/kubernetes.io~secret/proxy-tls major:0 minor:815 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d288e5d0-0976-477f-be14-b3d5828e0482/volumes/kubernetes.io~projected/kube-api-access-5k8rp:{mountpoint:/var/lib/kubelet/pods/d288e5d0-0976-477f-be14-b3d5828e0482/volumes/kubernetes.io~projected/kube-api-access-5k8rp major:0 minor:364 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9075a44-22d3-4562-819e-d5a92f013663/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/d9075a44-22d3-4562-819e-d5a92f013663/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:537 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9075a44-22d3-4562-819e-d5a92f013663/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/d9075a44-22d3-4562-819e-d5a92f013663/volumes/kubernetes.io~empty-dir/tmp major:0 minor:536 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9075a44-22d3-4562-819e-d5a92f013663/volumes/kubernetes.io~projected/kube-api-access-htqw9:{mountpoint:/var/lib/kubelet/pods/d9075a44-22d3-4562-819e-d5a92f013663/volumes/kubernetes.io~projected/kube-api-access-htqw9 major:0 minor:539 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33/volumes/kubernetes.io~projected/kube-api-access-bt7hs:{mountpoint:/var/lib/kubelet/pods/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33/volumes/kubernetes.io~projected/kube-api-access-bt7hs major:0 minor:777 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33/volumes/kubernetes.io~secret/cert major:0 minor:774 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e485e709-32ba-442b-98e5-b4073516c0ab/volumes/kubernetes.io~projected/kube-api-access-qwc4l:{mountpoint:/var/lib/kubelet/pods/e485e709-32ba-442b-98e5-b4073516c0ab/volumes/kubernetes.io~projected/kube-api-access-qwc4l major:0 minor:543 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~projected/kube-api-access-hkdfn:{mountpoint:/var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~projected/kube-api-access-hkdfn major:0 minor:1002 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~secret/default-certificate major:0 minor:992 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1000 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~secret/stats-auth major:0 minor:993 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee/volumes/kubernetes.io~projected/kube-api-access-s2znn:{mountpoint:/var/lib/kubelet/pods/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee/volumes/kubernetes.io~projected/kube-api-access-s2znn major:0 minor:766 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:765 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~projected/kube-api-access major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~projected/kube-api-access-zk4sg:{mountpoint:/var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~projected/kube-api-access-zk4sg major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~secret/webhook-cert major:0 minor:139 fsType:tmpfs blockSize:0} overlay_0-1008:{mountpoint:/var/lib/containers/storage/overlay/2a30464099175fd5f2b144794c5bee2c69b66de1bea00d044a583aa7e3b10845/merged major:0 minor:1008 fsType:overlay blockSize:0} overlay_0-1012:{mountpoint:/var/lib/containers/storage/overlay/a597f624c4f834b51be96657cc2617b40905534417b0322206e1ce0eb32425c8/merged major:0 minor:1012 fsType:overlay blockSize:0} overlay_0-1014:{mountpoint:/var/lib/containers/storage/overlay/24729b36bc32e83807500c259c10a4e8bdf8a591e37a9493eab5d8fe4acca36d/merged major:0 minor:1014 fsType:overlay blockSize:0} 
overlay_0-1017:{mountpoint:/var/lib/containers/storage/overlay/52792db47d8cfdebde6016e5f546a669227b74671378051ac7c6b49073607a7a/merged major:0 minor:1017 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/15cd2fd63cd156ecd3094d86f3104f9db079b84990e8d39b136313a4d00d0169/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1023:{mountpoint:/var/lib/containers/storage/overlay/94811f1121feb077acf0d724e9d089acb4b0e8bb29b003af703cfe9d8b02f8cd/merged major:0 minor:1023 fsType:overlay blockSize:0} overlay_0-1024:{mountpoint:/var/lib/containers/storage/overlay/c19a1b8fa6508c63b0162cefa2e0a8f4cef075b5e6550d004383b762489ef10d/merged major:0 minor:1024 fsType:overlay blockSize:0} overlay_0-1038:{mountpoint:/var/lib/containers/storage/overlay/7906426a1686030a322d13c911f14e846caa9abd52dc973cb33204f220851101/merged major:0 minor:1038 fsType:overlay blockSize:0} overlay_0-1040:{mountpoint:/var/lib/containers/storage/overlay/da4ef76035fedd9f4afa5bcb1b9fad881689d5ce601a451b4940a6f65e045e1f/merged major:0 minor:1040 fsType:overlay blockSize:0} overlay_0-1049:{mountpoint:/var/lib/containers/storage/overlay/da48a21ff11f56fddf5ce29890905305bdbf8902629221eab5c6945555f115ca/merged major:0 minor:1049 fsType:overlay blockSize:0} overlay_0-1054:{mountpoint:/var/lib/containers/storage/overlay/54462f525ef6dedb0315c4790a03f3e93380980b185aa6050294c06e47404ed3/merged major:0 minor:1054 fsType:overlay blockSize:0} overlay_0-1076:{mountpoint:/var/lib/containers/storage/overlay/7557062294edd5438e28ce56da882d4a120ab5524cd98709555cba799e901a78/merged major:0 minor:1076 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/efef037baa3b2aca6c932fef29e87cd081e7d9f8a666c289364e342bc82e16ec/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-1080:{mountpoint:/var/lib/containers/storage/overlay/e63e7df44457303a8d5e705a40b861fa68d1cc2feefbbdf7386d3d186cc262f1/merged major:0 minor:1080 fsType:overlay blockSize:0} 
overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/648a102a07e9e76463510c0cf987fed4ee490b618462aad653ae0f9a78805891/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/8e633941b9e94c07114a2efdd1712f08771082556717a5312067107136fafe7e/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1085:{mountpoint:/var/lib/containers/storage/overlay/3f22345119cb9cec346c1f51b3239b87fb05ae20c01e1a310563b4a0cba4460d/merged major:0 minor:1085 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/b66337a9cbe9a134333a3e1cfc2d6f0735908bff01b24894641a44f9da2e8b6b/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/25a74067d6b9ba88d2c3a2f34b1189e06ddfea3f89922cceda7bb6cb276a4906/merged major:0 minor:1097 fsType:overlay blockSize:0} overlay_0-1099:{mountpoint:/var/lib/containers/storage/overlay/38d335ee430d93f10df8493f182949539d9d69008e9d9d3c083975009d0269c9/merged major:0 minor:1099 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/4195a7566e42a7504423214634802e6d7dc75988438d40dc94260a7bf0c0c63d/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-1100:{mountpoint:/var/lib/containers/storage/overlay/2d29873e5459c6f4e34c7d54b2a0ffd4965c55fe7f9c21d78631da131b85c7d9/merged major:0 minor:1100 fsType:overlay blockSize:0} overlay_0-1103:{mountpoint:/var/lib/containers/storage/overlay/4c9f615e225812fce5fe0f27b6f34f7c08282bade0fbbd9a1adc98aebd33e08a/merged major:0 minor:1103 fsType:overlay blockSize:0} overlay_0-1115:{mountpoint:/var/lib/containers/storage/overlay/b0d8988df2ccf03b4667bacae51916a7b5467c38e7cb446fb4ec2b86d9472918/merged major:0 minor:1115 fsType:overlay blockSize:0} overlay_0-1132:{mountpoint:/var/lib/containers/storage/overlay/c6cb36d4600ee8d2023e6e9b0ade57a0854254da1e27a4fd830619b5e1b53b28/merged major:0 minor:1132 fsType:overlay blockSize:0} 
overlay_0-1134:{mountpoint:/var/lib/containers/storage/overlay/cbf201503280dfd26b063e523d31cfbc9cc35d7c4673cd810a225687c69229cb/merged major:0 minor:1134 fsType:overlay blockSize:0} overlay_0-1141:{mountpoint:/var/lib/containers/storage/overlay/bd7b5344bd42e192ebd145c1e3d00fda5673e97d9d013587f7ffab9caf89b41b/merged major:0 minor:1141 fsType:overlay blockSize:0} overlay_0-1156:{mountpoint:/var/lib/containers/storage/overlay/b8f90098b633711947af1d67811063e7493e548e04ab1aa2b2ec31922cc93793/merged major:0 minor:1156 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/788a52383ea7cd6022311464850915f5d2d0a4868e1dc00beb1637ef79c43539/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1163:{mountpoint:/var/lib/containers/storage/overlay/17dfae88b30013f767b6aed8016c0db16fbdea561006e3d5b1885967d2cb8197/merged major:0 minor:1163 fsType:overlay blockSize:0} overlay_0-1180:{mountpoint:/var/lib/containers/storage/overlay/4c8d79fc022b1d54444be8f16b681dee3387e166a8ddedcb12b2b50ece500ce5/merged major:0 minor:1180 fsType:overlay blockSize:0} overlay_0-1183:{mountpoint:/var/lib/containers/storage/overlay/0ee5b2873e38f713be669b7e3d744c0907e76153a5e6dd9b6803d58ff8f0f325/merged major:0 minor:1183 fsType:overlay blockSize:0} overlay_0-1185:{mountpoint:/var/lib/containers/storage/overlay/31c8ab805f2dbabfb3b46a9422c30b99b9dc3dd813e0ebface16e6dc288b7418/merged major:0 minor:1185 fsType:overlay blockSize:0} overlay_0-1187:{mountpoint:/var/lib/containers/storage/overlay/e9a7114c2521f562313c885a3b896aea4a318cafee8c88a65bcd3063f50ecbc5/merged major:0 minor:1187 fsType:overlay blockSize:0} overlay_0-1191:{mountpoint:/var/lib/containers/storage/overlay/db917d82b9cd03bcd88748877b66f8a30d6b194c491444589d240e4e199a3ec4/merged major:0 minor:1191 fsType:overlay blockSize:0} overlay_0-1196:{mountpoint:/var/lib/containers/storage/overlay/84ab1d5c85c2d47cbe756d006a4201382554c84b4f2d32e6286c45aa655913d8/merged major:0 minor:1196 fsType:overlay blockSize:0} 
overlay_0-1203:{mountpoint:/var/lib/containers/storage/overlay/ed94761a4a1353bb6eee11e165d4ce3b207b6382dfe55cea068bcebf21b36ada/merged major:0 minor:1203 fsType:overlay blockSize:0} overlay_0-1205:{mountpoint:/var/lib/containers/storage/overlay/3ecb05cc15e6a8bcf78a2e6b75d0ee1a4b308a2ee499366017e0a11421feae5a/merged major:0 minor:1205 fsType:overlay blockSize:0} overlay_0-1209:{mountpoint:/var/lib/containers/storage/overlay/25c9557fecbd6d03d4490004bd419501b96b7224dc2080be85ae5c66dc4c0809/merged major:0 minor:1209 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/7e17db66c457e9895aad61f332432c5dd0af0963e1891fdf23b1ebd26585927d/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-1211:{mountpoint:/var/lib/containers/storage/overlay/eb300fa309e4491f57e39dda3c5b9229a2c53000f86b1ef54e8b9495664e8f4f/merged major:0 minor:1211 fsType:overlay blockSize:0} overlay_0-1222:{mountpoint:/var/lib/containers/storage/overlay/050251ce402932a659d091793ff3480d63fa8ffcd935011c4e3fabbb3410737b/merged major:0 minor:1222 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/bf554e2439fdf3ee56c8b4adcaa85968e90fb5ab701ccf41700c1426f7ee48e4/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/2add642a88e02f020308d0045430d6741693a305930c653bb02a1e27756bad73/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/d63b1aece93d203873e89d5604853a3e61c406a5344c7f8fa20fe29431471213/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/e0c6b822443411b9107d8404b82b86aa89cd61c9eb48b79c1259f732b7497dbe/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/c597e3becccf12c3a78ddd6ac9e75b1cb3e18ba7aa8ea84e9874ce88c7b9213e/merged major:0 minor:147 fsType:overlay blockSize:0} 
overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/c2833d0b0078324662b887eeb23c1ec63f5c878389e0e8323f2c32558da2a8e0/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/0aa5d6a4e568b33a699c8f27499bf423d0b566e5a9c65c3dc51c7f7592e527eb/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/f07e477f7dc5fda552916b3b44a46ef62ae939ae84da9ccb9110471c7fb00ab0/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/31708b75edf4db97b66f9e86f70a8b5274b63b2e3a97d1153a8f11ea6c6012f9/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/0b1820f56e64ffb3a423207a4565934a47c34026d403fe9a978d2f0aae5d4829/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-171:{mountpoint:/var/lib/containers/storage/overlay/201b6ad7ac915be992120a11e221509b118dd179cd242654f67b1aac27c97b84/merged major:0 minor:171 fsType:overlay blockSize:0} overlay_0-173:{mountpoint:/var/lib/containers/storage/overlay/c3f61698c74d9098b2a680f8cf60a90fc7411432511e9ecdb84d03490377c946/merged major:0 minor:173 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/03181025380e8c62e68a6a35467a5e8745d080a0fd540dc9ee4c567b880f3e47/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/2343ab0b0893c439477f6dcf82a8558fba897982d47e761fa8babc5b8f143d09/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/ec5561b70148038f2901ea58f74d16b1387c98a158b3054bf2dc5e015de0d9de/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/542b5cd9c5e95a84d4f365276792f329ed9c6a96b0a986f2ce5721a8bece6a24/merged major:0 minor:189 fsType:overlay blockSize:0} 
overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/feba0ff4f86be6d6979ed0b9558a7a29794b691328b803d19740952471d032b6/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/39b09c8e6c795ba871df501b357aead982bd05239f7c88cf5813b8e994f6fde2/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/41b3f03062ce5561443eb75a29b40ff983ed70bd2a397b5dbb415673ab41e9af/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-262:{mountpoint:/var/lib/containers/storage/overlay/d4367d22cb29a8e989bed91234ebff4bf1225a4d1b58fe54d4496d26bc721431/merged major:0 minor:262 fsType:overlay blockSize:0} overlay_0-266:{mountpoint:/var/lib/containers/storage/overlay/abe5a50d69e7f250d0dfe249c5fc909db06c3a3d4d79d2860360152aac251baf/merged major:0 minor:266 fsType:overlay blockSize:0} overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/91dd884f93ef7a68593d246f1901220282b9123b6007bca6b2dd054f9073eebd/merged major:0 minor:272 fsType:overlay blockSize:0} overlay_0-278:{mountpoint:/var/lib/containers/storage/overlay/1f133c8552c58fd89f515ca78e658ac759204ca1ffb475f78b354dee4a93905f/merged major:0 minor:278 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/f2151ec939cb5904ee34f88ca0e354dde34b24135df8afdce54d4ed0e52240be/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-284:{mountpoint:/var/lib/containers/storage/overlay/0d7baa00cc913c1f49d5ebc952cf16a26f23281ca3827bb7df6db40240a9c467/merged major:0 minor:284 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/47c5f226446daf16de682350249a9878093b699df65e600ecd2fa09d852645a6/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/013f5d067642c94b9197094c393116f7dabc8c09c3aa204876818087a4f3f848/merged major:0 minor:287 fsType:overlay blockSize:0} 
overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/2945ee8653d2765d7073c4df3699b917dc99e28e06ad04a27ca366dcdfb05882/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/8b7c4bbddcedff3b1be56be44f17102a2570ba3a2f224f0cbdf2ce438a978f7a/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/0279774a67033301b7f7ab66078b8f3be5116970d17b45e04f5a026715a32319/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/03e0fabd7cf8e6e72f9fab026c15b3f4f0275252a1b37c40f934150384c6e5f5/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/4c5866b18fd27945da0f8a53206b74156a6610997a9124b391ac7fa6543f64d4/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/02a9a54a92d7dc09a1901c853cc39f2a6ff51369cf1e2a19036a765139eaf92b/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/4a8ff66cd121c6913ab992382da00f05f16eb1b3c88f2b6dad61779f9d57cdcd/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/e8088ea1096576062108eb1ba60b81aab43e5c6c03ed7b21829420c342d9f0d6/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/5c3774112e1c8db0592e37df80f7ad128561a148bffcb86f2ed812d050daf236/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/3ce48f7ca8bd04ab397001d279915085b7751e37f1da033c66c5317a234a4bd3/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/95ca868c54da32178f5bd5caacb4862a6f5139e31172114fb6ca42f53cb4ddbe/merged major:0 minor:353 fsType:overlay blockSize:0} 
overlay_0-354:{mountpoint:/var/lib/containers/storage/overlay/d36f586f9365f365121fd29f39038fc249e8e709977ff2bd16acf19aa79642ba/merged major:0 minor:354 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/167dd8ef2c48f288ea4e13d2dac0d980d31143e05fcca1b7ac443fc13bfae932/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-363:{mountpoint:/var/lib/containers/storage/overlay/8685c8ee01d7736983a652387f3344751e7df894bbca906c2cf09eb33aae5d79/merged major:0 minor:363 fsType:overlay blockSize:0} overlay_0-366:{mountpoint:/var/lib/containers/storage/overlay/ed7df976607cd7518f35286a23a5746aef5bf93b785753e90a508565c5c5542d/merged major:0 minor:366 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/1a18a8b4d3b6586314dc67e6d3de2b1e1e861780015ec4135e3589549613f543/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/e3374024fdeecbdf81a86c26b96932ed4a49dbf83aff7d478960f646920d0666/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-379:{mountpoint:/var/lib/containers/storage/overlay/8e3a195126d829c7d9136c0f7a8aba923cf013eba6d27c27f65a531fa19ebcd7/merged major:0 minor:379 fsType:overlay blockSize:0} overlay_0-381:{mountpoint:/var/lib/containers/storage/overlay/6fe36d93b60b130398756d81ef14977342d2e811208701b98c6f4a0b30d243da/merged major:0 minor:381 fsType:overlay blockSize:0} overlay_0-383:{mountpoint:/var/lib/containers/storage/overlay/f207212656328dc6ac637753c136434e940fa2452a22f4c687bc0aaa39197b8f/merged major:0 minor:383 fsType:overlay blockSize:0} overlay_0-384:{mountpoint:/var/lib/containers/storage/overlay/af1c65c27abbe9e2248ecdb7297804e95bda6f68f8cca7cf5c435ab462d084f3/merged major:0 minor:384 fsType:overlay blockSize:0} overlay_0-386:{mountpoint:/var/lib/containers/storage/overlay/a7610c6b72eebcf09b3c46e14100a6aca6c46ac13365f1c409ed2eecb818d8b1/merged major:0 minor:386 fsType:overlay blockSize:0} 
overlay_0-388:{mountpoint:/var/lib/containers/storage/overlay/b012692e4cee533da7e66ffa86f305d75b74a6faee2968bc02ae99daee44c9c1/merged major:0 minor:388 fsType:overlay blockSize:0} overlay_0-390:{mountpoint:/var/lib/containers/storage/overlay/3755b89bd4ad1dfb59e10dad44906e8aca54b4340fa2c5b2ac852cd0340047e8/merged major:0 minor:390 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/05cf57117c1156dd52d1f13ec916e6fe4dd6203b01a1369bfab98dabcedabfab/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/1756c4a9955814ec1b8936ae6a07695773b4f8c30cced94edc6add4ae0d5c812/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-401:{mountpoint:/var/lib/containers/storage/overlay/af55e55f6659e463bd594ff5f91487c2aba6cba8d951d137f60311b2f7a977c3/merged major:0 minor:401 fsType:overlay blockSize:0} overlay_0-408:{mountpoint:/var/lib/containers/storage/overlay/fb6e7442ac5fe99954b48421a416e9fadb13efc6373ccd3d9ba31eec5deb09a5/merged major:0 minor:408 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/4b577aa0d46fde5b760c1589d425dbeae60593861af019935020a860dacd7fd0/merged major:0 minor:409 fsType:overlay blockSize:0} overlay_0-412:{mountpoint:/var/lib/containers/storage/overlay/2b0036b128cbade1309e23c7cc74fc22e85dd6b36d981987f59d1b5bbb1e8497/merged major:0 minor:412 fsType:overlay blockSize:0} overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/d106a80ce037c413e5c2b32251deb6b94af517bc36e40a4e9f87e7a8d8dfd8d6/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-415:{mountpoint:/var/lib/containers/storage/overlay/66fcefba4451cde1f6a338167a7aee807709668be01475a7e3c327225a3b2544/merged major:0 minor:415 fsType:overlay blockSize:0} overlay_0-417:{mountpoint:/var/lib/containers/storage/overlay/ae251639872debe144886b5febe481ab0d2d0de6c7d37729442c32d7a1f51d20/merged major:0 minor:417 fsType:overlay blockSize:0} 
overlay_0-418:{mountpoint:/var/lib/containers/storage/overlay/317c67a5a739b21199c9612cce34cce8c219a7a11d882cda66988f7371f257f2/merged major:0 minor:418 fsType:overlay blockSize:0} overlay_0-420:{mountpoint:/var/lib/containers/storage/overlay/fceae77071c1258e41d90596c81dd2c62ddc239685ce3d48177d0c710d1db173/merged major:0 minor:420 fsType:overlay blockSize:0} overlay_0-427:{mountpoint:/var/lib/containers/storage/overlay/acbb88889c3d48c9b40fbb07749324f837cc19c20d69bfa36253d414a1bb7649/merged major:0 minor:427 fsType:overlay blockSize:0} overlay_0-433:{mountpoint:/var/lib/containers/storage/overlay/cabd7dd871f2e67d583929d60ebec1def4949dec093b4b1109ad72c048b67448/merged major:0 minor:433 fsType:overlay blockSize:0} overlay_0-435:{mountpoint:/var/lib/containers/storage/overlay/74e14ca154d5389c0cc5269f3754a3b2d0243ba20b04e047712a246bf050e217/merged major:0 minor:435 fsType:overlay blockSize:0} overlay_0-444:{mountpoint:/var/lib/containers/storage/overlay/8b197daee51def6b37d078f97be11e9ad52cd6ef593192868a2ea524a2329b36/merged major:0 minor:444 fsType:overlay blockSize:0} overlay_0-451:{mountpoint:/var/lib/containers/storage/overlay/b7ccc921dfaacabb0ca9711d675feb9f3b8ebd21995f60dad3a4ebaf47077d2e/merged major:0 minor:451 fsType:overlay blockSize:0} overlay_0-456:{mountpoint:/var/lib/containers/storage/overlay/79f76410b0233c290adf16c6cf5ac42ebf8eca421cb8b0648d13964ddaaa2adc/merged major:0 minor:456 fsType:overlay blockSize:0} overlay_0-467:{mountpoint:/var/lib/containers/storage/overlay/7596e29a1610345b5d19285da94aa4be3629c27b086b57d1cc3e87bff91269a8/merged major:0 minor:467 fsType:overlay blockSize:0} overlay_0-469:{mountpoint:/var/lib/containers/storage/overlay/de30e2712142b7393b4c608283f52caa731ab822990097585dcf71e3e3fd4623/merged major:0 minor:469 fsType:overlay blockSize:0} overlay_0-471:{mountpoint:/var/lib/containers/storage/overlay/c77ab25fd1f434785876c9a618288aad62f03731dae0b1643b89c3a051820825/merged major:0 minor:471 fsType:overlay blockSize:0} 
overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/766044baeb2a41c3d6888fac7f07e5323f8fad79825d0fe52bc27cea1e42e1da/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-485:{mountpoint:/var/lib/containers/storage/overlay/ff29e51f6bb5c8a390bb3b5fec77e51b538bfc252cd7a45b5b544ea515a92ba0/merged major:0 minor:485 fsType:overlay blockSize:0} overlay_0-497:{mountpoint:/var/lib/containers/storage/overlay/67be7977f47dc3d21b173eac8d7f61430092130c2c0e675f019baafb5df2bc6b/merged major:0 minor:497 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/36513d08985004e7b2c22d61ecc1fdd8da1d50ed500631d3de72f7229bd98544/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-520:{mountpoint:/var/lib/containers/storage/overlay/a77ff9dab7c8aa6c392cff6254771d77719ff00c9a1a54dcc075928895127bcb/merged major:0 minor:520 fsType:overlay blockSize:0} overlay_0-524:{mountpoint:/var/lib/containers/storage/overlay/199f7a19ca55dca359d8b5b57b7ac04578fa003f0d8cf3c86951f80e45f3933c/merged major:0 minor:524 fsType:overlay blockSize:0} overlay_0-527:{mountpoint:/var/lib/containers/storage/overlay/5c46c4a49b36f032ea738faceb3d93b179ae93a12e420e004d7dfab6b659739f/merged major:0 minor:527 fsType:overlay blockSize:0} overlay_0-531:{mountpoint:/var/lib/containers/storage/overlay/5411ef7b95458de2294cee2ccaa21b9522ab1d08b17daa9c0bd0d432cf24c062/merged major:0 minor:531 fsType:overlay blockSize:0} overlay_0-535:{mountpoint:/var/lib/containers/storage/overlay/f769d3f7c45f6e6965ec7e12be7a581467ce03c3a7c70189a76996ca91c2b82b/merged major:0 minor:535 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/4ddad568793dbe502953ef910106336647ebe7df4dca502d35422a0c5f81cb3f/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-540:{mountpoint:/var/lib/containers/storage/overlay/91a0615f0c43a82cc5f745f19a043adbabcf82441223b30d599f71e3cc55eb4b/merged major:0 minor:540 fsType:overlay blockSize:0} 
overlay_0-542:{mountpoint:/var/lib/containers/storage/overlay/087e955d72500dc37056fcbc84078debc7bbe51508524551e2ab879b5626a2a8/merged major:0 minor:542 fsType:overlay blockSize:0} overlay_0-544:{mountpoint:/var/lib/containers/storage/overlay/d8dbaafec3215836701500a5a95266ed1b788e099e588e4b190030dad80c0d18/merged major:0 minor:544 fsType:overlay blockSize:0} overlay_0-549:{mountpoint:/var/lib/containers/storage/overlay/419c40f8352cf862e255481ab5d69a57556e0961af741367c8436647fff68154/merged major:0 minor:549 fsType:overlay blockSize:0} overlay_0-553:{mountpoint:/var/lib/containers/storage/overlay/11e99095828808411269d06f734c86c16e936dca625d18f382478f8dee24a6bc/merged major:0 minor:553 fsType:overlay blockSize:0} overlay_0-555:{mountpoint:/var/lib/containers/storage/overlay/c5ef3c1c4b147a95da236555525737e27328adabd56549dd6bd962c501bda00f/merged major:0 minor:555 fsType:overlay blockSize:0} overlay_0-557:{mountpoint:/var/lib/containers/storage/overlay/691673ae82e4cf926e4ec691bcb350ca14ee70575f95eff3f503506d832810ad/merged major:0 minor:557 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/01f69c8f5e4a67c31420afa375e4df5bab8f09d9c791b88a73694a8e44bd8afa/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-575:{mountpoint:/var/lib/containers/storage/overlay/2bd6713237848bf556652009dbc52f142123a993c8deae53eae0a3f8764efd9f/merged major:0 minor:575 fsType:overlay blockSize:0} overlay_0-577:{mountpoint:/var/lib/containers/storage/overlay/352d181048dee0d37adcdcef4ed372c623c7db2fa5460938d0a6894fee6bb440/merged major:0 minor:577 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/c9c60a978cae3940290571bedea229bef80627f3a41626655f5271bd9409d91a/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-587:{mountpoint:/var/lib/containers/storage/overlay/5903178f1060532eaf9beebd6e97c965c7cd31c4607ac542ed74966da0113cce/merged major:0 minor:587 fsType:overlay blockSize:0} 
overlay_0-591:{mountpoint:/var/lib/containers/storage/overlay/b3688734d5544ef2a73b953d6692b84c8ef755f7c92ddbc9ce400485d6834769/merged major:0 minor:591 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/029a6a5dc530f17b7b7ffe4a13755ba34b1b1d97fa238e90da5a118ae4196d02/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-602:{mountpoint:/var/lib/containers/storage/overlay/e87c5d0d57972c8ee932c463b449282fb0ad3398924df04313d0c6ce76c1ae2b/merged major:0 minor:602 fsType:overlay blockSize:0} overlay_0-603:{mountpoint:/var/lib/containers/storage/overlay/9584b7a0916cab191eb1f8d56bc01351d324e18a3cd1c91d4c10c815b7d8df13/merged major:0 minor:603 fsType:overlay blockSize:0} overlay_0-607:{mountpoint:/var/lib/containers/storage/overlay/76dd4809d41baed280ff50fb9800d5a966cfe0bd53dee6af84ec0f5297dd23ae/merged major:0 minor:607 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/7ffb29468e09cd178747ecb3aba23d3d8d23d46b32e0b84e41ecaa0d6f0fef07/merged major:0 minor:612 fsType:overlay blockSize:0} overlay_0-614:{mountpoint:/var/lib/containers/storage/overlay/aed0d17db20458ace5f4977d9f67daf85ad695fd4b6947ab9033b93517d43880/merged major:0 minor:614 fsType:overlay blockSize:0} overlay_0-628:{mountpoint:/var/lib/containers/storage/overlay/756c3b27164dbce7ee469a79f4cb22ef3544ff516e7816df073de487fd3abb0b/merged major:0 minor:628 fsType:overlay blockSize:0} overlay_0-629:{mountpoint:/var/lib/containers/storage/overlay/320291f33d1dbd020bbec6bb045b6708c42af957be5a0417cb5bcde4a485e19e/merged major:0 minor:629 fsType:overlay blockSize:0} overlay_0-636:{mountpoint:/var/lib/containers/storage/overlay/85e2111505dc00123c8c6ca06eda38a1fe8a038c829ab08fa8eac4d95e8488dc/merged major:0 minor:636 fsType:overlay blockSize:0} overlay_0-639:{mountpoint:/var/lib/containers/storage/overlay/d39c4b267dc75334443e31e08ccb8706c14f9336fba856257fa9a470a6d58d0a/merged major:0 minor:639 fsType:overlay blockSize:0} 
overlay_0-641:{mountpoint:/var/lib/containers/storage/overlay/c9db5a2da76e58ed69330bd4692ba7e364e78f391c15f59cfbab55209f260916/merged major:0 minor:641 fsType:overlay blockSize:0} overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/fb7867d5dbc266e693102f9f12351de26f5e9b34b9cb665ae7993301e1730e3f/merged major:0 minor:652 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/2b59a805901734aae58c5c9049ac93ad9d7646068b9bb46cc4e645337c441bda/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-669:{mountpoint:/var/lib/containers/storage/overlay/4213e657c95111d8b052a78322dbb5af7c8d3fe4d85cbab8a3fc044e4d8757a8/merged major:0 minor:669 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/52f7a2b56980a68e99c2846054fb551acab805eafa41246e453b5a752bcce018/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-678:{mountpoint:/var/lib/containers/storage/overlay/044ca7af46b2ed897f6cbfa59a037048a4195e48b40d2132773aae7b1f381ed5/merged major:0 minor:678 fsType:overlay blockSize:0} overlay_0-687:{mountpoint:/var/lib/containers/storage/overlay/994d13fba27e37ea38cfd83f403b367bb4ed4eb25dcdb4a363f332d6008ee881/merged major:0 minor:687 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/71a7022437878382c11e3d17f36e8a3f5492c409ab47af159c5bbf501d375e0a/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-695:{mountpoint:/var/lib/containers/storage/overlay/286608f885d003151bc4e96ad0f2f725667760fa902cc03f47419a482ac17d03/merged major:0 minor:695 fsType:overlay blockSize:0} overlay_0-703:{mountpoint:/var/lib/containers/storage/overlay/e04029a627e0cbd7995f09ab8135ee90888a8e1eedf8ac852d00f3cf01efafa3/merged major:0 minor:703 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/e04741fae301904823dd37c1b1bfeecabb7e81296f49c97b3c5f4fc721a73cff/merged major:0 minor:71 fsType:overlay blockSize:0} 
overlay_0-715:{mountpoint:/var/lib/containers/storage/overlay/cbca92d56999e3fb75f151f8ec9762a09ff1631134cd0686cda36978c0b6d3b1/merged major:0 minor:715 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/7936a11aa85072ad64db3b251fe3a9837276b9cbbd943b3366348441d1a0c835/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-728:{mountpoint:/var/lib/containers/storage/overlay/cc24bd33808d6ed78d0a9e9cf128eae733e880b3456b552a8c22ee42e1abfe16/merged major:0 minor:728 fsType:overlay blockSize:0} overlay_0-732:{mountpoint:/var/lib/containers/storage/overlay/9daff4ba6dd0541b3528631c604f108b9bf768980ff340da34d0c84fa97a9d1b/merged major:0 minor:732 fsType:overlay blockSize:0} overlay_0-736:{mountpoint:/var/lib/containers/storage/overlay/7a7d3b8aea689d6f94f43a3dcaf10b25a0c37cdb8a30bf87b5a5e5d8a3e92c5f/merged major:0 minor:736 fsType:overlay blockSize:0} overlay_0-740:{mountpoint:/var/lib/containers/storage/overlay/fbc4130e4e3b7c58fdbaa9021a5589504351464878aca0744e4e2536fa4278bc/merged major:0 minor:740 fsType:overlay blockSize:0} overlay_0-742:{mountpoint:/var/lib/containers/storage/overlay/91a9890f05a2ba71ddb72c13f447e047eb9219fde2790e14997025a1933c257b/merged major:0 minor:742 fsType:overlay blockSize:0} overlay_0-744:{mountpoint:/var/lib/containers/storage/overlay/0cae8acf23b1a5dbb81ec7eb8fba8ef604b393256aeaed020dd29f8a9b0edcf9/merged major:0 minor:744 fsType:overlay blockSize:0} overlay_0-768:{mountpoint:/var/lib/containers/storage/overlay/e250b7c40235b529adeee00aad26fbf7f3c2f18cb90ad89a9baf22aeaff89c8d/merged major:0 minor:768 fsType:overlay blockSize:0} overlay_0-785:{mountpoint:/var/lib/containers/storage/overlay/ef10209af4280a7d76d37a2fb67c519bff900ffba2de5d3002bbc72786f3cb43/merged major:0 minor:785 fsType:overlay blockSize:0} overlay_0-787:{mountpoint:/var/lib/containers/storage/overlay/a78a6b79c3c2d0ffa599ce3518940acb954b93eac95b4a7d2efbc45a764ac236/merged major:0 minor:787 fsType:overlay blockSize:0} 
overlay_0-789:{mountpoint:/var/lib/containers/storage/overlay/8adced7175dab541376b3ca45fa4269901a79b7f0dc4bf03f0370beda2d1bd7e/merged major:0 minor:789 fsType:overlay blockSize:0} overlay_0-796:{mountpoint:/var/lib/containers/storage/overlay/a88ebd7ec0f283f8ecbca3153b36c419b4c494333a2d97dd90c8a9df84c3b3cd/merged major:0 minor:796 fsType:overlay blockSize:0} overlay_0-803:{mountpoint:/var/lib/containers/storage/overlay/1ade720c08011d805f3b3fd08aa0939f735daabb8fabc06d4a3752b81d804f12/merged major:0 minor:803 fsType:overlay blockSize:0} overlay_0-808:{mountpoint:/var/lib/containers/storage/overlay/c700817bc1a6e04f847c653d8f015e212dec464ba496e4343100413d6cdfa09e/merged major:0 minor:808 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/2bec793ef9306cec945c4641732e0adf0d7e2256bc56d077bbe04cb3b5e4841c/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-812:{mountpoint:/var/lib/containers/storage/overlay/7ec1241fd1d6e8fea9eb43f1dc6447d279a4c80008a61a124f8b63124fa23918/merged major:0 minor:812 fsType:overlay blockSize:0} overlay_0-817:{mountpoint:/var/lib/containers/storage/overlay/c4c5825fbe934d2ada97f947b780b86d99da8c9ce3e1a342028fa7a7475d1fd8/merged major:0 minor:817 fsType:overlay blockSize:0} overlay_0-819:{mountpoint:/var/lib/containers/storage/overlay/48f952e0182d92078c6263a59ecc51c503ad1c08689ccea4a2b00e916758f385/merged major:0 minor:819 fsType:overlay blockSize:0} overlay_0-821:{mountpoint:/var/lib/containers/storage/overlay/0a1943d7f5c4e22663fcb0c0409b3ffdabb2803601d0ff19729ba5f27a114482/merged major:0 minor:821 fsType:overlay blockSize:0} overlay_0-824:{mountpoint:/var/lib/containers/storage/overlay/157caa6c0f735da754d6f56975241663c9c2dcae250caf178113d47cc6ee25d8/merged major:0 minor:824 fsType:overlay blockSize:0} overlay_0-826:{mountpoint:/var/lib/containers/storage/overlay/09340a492eb994fa86f8bd530f1e33f881c3f75db8c673c478dcbf3b5202e521/merged major:0 
minor:826 fsType:overlay blockSize:0} overlay_0-848:{mountpoint:/var/lib/containers/storage/overlay/fd2d3c22cced651b61e65ca2ebf04c1edac65b571353cbe781da70c1e0d092e7/merged major:0 minor:848 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/8fea27f4eb993e7c1244a7c4b1605ab8daf83f0d8a25dcffc5e2cc6ea8f46e53/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-853:{mountpoint:/var/lib/containers/storage/overlay/686edc4f21c49e9af2c8245cbe2365dbeee73be3edc95df919f53b41fb6b1388/merged major:0 minor:853 fsType:overlay blockSize:0} overlay_0-855:{mountpoint:/var/lib/containers/storage/overlay/8bb06a17151e9dc00423821ab7c26d47e9128df2a1299497cf9e0a85e5b05335/merged major:0 minor:855 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/85f0f70bb822b33f6d77a775ac0506f08776cd42afd76081d8f8c7d09d0401d1/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/85d21fe1e05ac432616cdd45cfd26658343e14f12d8459da73df06daffc5ce52/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/60588ece84296f0e10bb84c6dedadcf563711799da463d198c09a02f54e743e7/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-871:{mountpoint:/var/lib/containers/storage/overlay/ed290275a7d1d836a2acf89bd495adf3cee9c5376b4856dcf518c65b91e07efc/merged major:0 minor:871 fsType:overlay blockSize:0} overlay_0-873:{mountpoint:/var/lib/containers/storage/overlay/3c34f6da26e4f887c35c8ca7d08940298ff042239ad9915e3789286390542532/merged major:0 minor:873 fsType:overlay blockSize:0} overlay_0-878:{mountpoint:/var/lib/containers/storage/overlay/bb055c0e107efd56cf0b35d8bdccea701f277bc916c424fabd84fca12e7d02f7/merged major:0 minor:878 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/65f0c669b7f103d1dde5f2de7ce0dc876a2d8abdd5504e8eb962817833a7297a/merged major:0 minor:880 fsType:overlay 
blockSize:0} overlay_0-881:{mountpoint:/var/lib/containers/storage/overlay/f735f96bee72f91e7d3170b97a4c905eba614aea8ffe6d9be9e559fddc2dbc03/merged major:0 minor:881 fsType:overlay blockSize:0} overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/3621f93b473f1593988d1e62ed874b1b89e0ef2017d88e5fc6a8fcaeb893f86c/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-891:{mountpoint:/var/lib/containers/storage/overlay/d4be97dd5a663c3fff846fc9badfebc344adb388aa0e831a13d975cc9c046cab/merged major:0 minor:891 fsType:overlay blockSize:0} overlay_0-892:{mountpoint:/var/lib/containers/storage/overlay/c7a43d884c854c9a173b0f93cd49a6645b1e0cc9d453500785a94609beb611a5/merged major:0 minor:892 fsType:overlay blockSize:0} overlay_0-897:{mountpoint:/var/lib/containers/storage/overlay/46ad9d56fd829db06fdc55e98993e69ee04f582066341acda497ce12888dfc01/merged major:0 minor:897 fsType:overlay blockSize:0} overlay_0-900:{mountpoint:/var/lib/containers/storage/overlay/f7419dba64de08f89c69732d196929b84be6c6187babad4e2a6f32b82b2f0b80/merged major:0 minor:900 fsType:overlay blockSize:0} overlay_0-901:{mountpoint:/var/lib/containers/storage/overlay/11cd5aaa81643ed1d9d3cff9c8e4fdbfae5b07aafac056b5e51dfaa9eaf6e11e/merged major:0 minor:901 fsType:overlay blockSize:0} overlay_0-910:{mountpoint:/var/lib/containers/storage/overlay/5d5e7452e9c32fcd2411739fc320aea118f7a8cae57fc155b8949a852226db42/merged major:0 minor:910 fsType:overlay blockSize:0} overlay_0-916:{mountpoint:/var/lib/containers/storage/overlay/d304fd7ae4ff8aa388637d03bdee93fe3df757f600275add7364f53d3ef42e98/merged major:0 minor:916 fsType:overlay blockSize:0} overlay_0-928:{mountpoint:/var/lib/containers/storage/overlay/193dd248584f7168318317576abeeb34bd69131ada5acf40d528d04b9bc4bc65/merged major:0 minor:928 fsType:overlay blockSize:0} overlay_0-930:{mountpoint:/var/lib/containers/storage/overlay/cd9a58280882fbcfe2e6200478ec6c9bf005a184ea34c63859a9135fc5fe93be/merged major:0 minor:930 fsType:overlay blockSize:0} 
overlay_0-932:{mountpoint:/var/lib/containers/storage/overlay/fedeb7a0d51f4a036eea9b72a9b2f49da00e49ca88d4bfb8836e061a60577af0/merged major:0 minor:932 fsType:overlay blockSize:0} overlay_0-942:{mountpoint:/var/lib/containers/storage/overlay/2765530d980f40877e78d3706dcafbe67f375c524b4ea05eecc65de94f7424d6/merged major:0 minor:942 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/20096b9dcda8120cdcf8a400ff5d09945862e82993e7ca45031eb2383a2c164c/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-955:{mountpoint:/var/lib/containers/storage/overlay/7b0a42c91c44f05f4398bb3b2b1a510867dc23c81da4932572033b7a80264e28/merged major:0 minor:955 fsType:overlay blockSize:0} overlay_0-977:{mountpoint:/var/lib/containers/storage/overlay/cbdd0ddc2d7cba66ef04004f969a860fbd73e7b209a27f7306f3127aa19adc99/merged major:0 minor:977 fsType:overlay blockSize:0} overlay_0-981:{mountpoint:/var/lib/containers/storage/overlay/35e868489ca6080de65b2cc9bee271e9a6881dd1168e6680a9e6b7f09b200e0c/merged major:0 minor:981 fsType:overlay blockSize:0} overlay_0-982:{mountpoint:/var/lib/containers/storage/overlay/bfeff8752b39d1a3d4cd73160c15d4653364fabe278fa1daf7eedeabb383b2ae/merged major:0 minor:982 fsType:overlay blockSize:0}] Mar 13 10:56:56.656226 master-0 kubenswrapper[33013]: I0313 10:56:56.655016 33013 manager.go:217] Machine: {Timestamp:2026-03-13 10:56:56.654188027 +0000 UTC m=+0.130141396 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:0b3c13f41020471d8d074d77a948365d SystemUUID:0b3c13f4-1020-471d-8d07-4d77a948365d BootID:8a9973c8-4daa-47e3-857d-01825c17d4bc Filesystems:[{Device:overlay_0-891 DeviceMajor:0 DeviceMinor:891 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-531 DeviceMajor:0 DeviceMinor:531 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/164736e7418a21cde804e102fe3d184a2797171e5f4bf83a8bf76c7c9b72cc41/userdata/shm DeviceMajor:0 DeviceMinor:758 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fcd78f90ad99c247dece0b85a206d1ac457560cacc8ddad5d00adc32257026d1/userdata/shm DeviceMajor:0 DeviceMinor:750 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b307d791edfe64a9b684cb84f780359a81088e7f734461ffa9d77ba51707349a/userdata/shm DeviceMajor:0 DeviceMinor:800 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:139 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2afe3890-e844-4dd3-ba49-3ac9178549bf/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:98 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-732 DeviceMajor:0 DeviceMinor:732 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-408 DeviceMajor:0 DeviceMinor:408 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:455 Capacity:32475533312 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d9075a44-22d3-4562-819e-d5a92f013663/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:536 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7cf1f1393ed4dc75d53053e58fde65a2d67118e8d37c0361a92ae7802d8b760d/userdata/shm DeviceMajor:0 DeviceMinor:701 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1187 DeviceMajor:0 DeviceMinor:1187 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0703b273c4e03bb7b56beaec08bcd6e173eecb63d506d0b227eed01f4963105d/userdata/shm DeviceMajor:0 DeviceMinor:369 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-555 DeviceMajor:0 DeviceMinor:555 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ee97873740b9b10b1888585dd4cf251d4592642ab8be20585d1c34abd206ca4/userdata/shm DeviceMajor:0 DeviceMinor:763 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~projected/kube-api-access-vxvqn DeviceMajor:0 DeviceMinor:152 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/981440e84066752679558f1f2c3a39bee9a4847d1a094c571e2638b8a5f2290d/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-415 DeviceMajor:0 DeviceMinor:415 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-607 DeviceMajor:0 DeviceMinor:607 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/beee81ef-5a3a-4df2-85d5-2573679d261f/volumes/kubernetes.io~projected/kube-api-access-f8q5s DeviceMajor:0 DeviceMinor:731 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/824d2e06fb02d75eff387f4090fa04e983a89eabed59a10155690e2b0750ea37/userdata/shm DeviceMajor:0 DeviceMinor:869 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-900 DeviceMajor:0 DeviceMinor:900 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/76246e9a1d2379cb0958975bb664cf21b612b44d022ee860fbd36d45bdea98e3/userdata/shm DeviceMajor:0 DeviceMinor:1006 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1024 DeviceMajor:0 DeviceMinor:1024 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-433 DeviceMajor:0 DeviceMinor:433 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~projected/kube-api-access-qqjkf DeviceMajor:0 DeviceMinor:225 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:235 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b12e76f4-b960-4534-90e6-a2cdbecd1728/volumes/kubernetes.io~projected/kube-api-access-xq9dl DeviceMajor:0 DeviceMinor:261 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/484e6d0b-d057-4658-8e49-bbe7e6f6ee86/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:734 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc/volumes/kubernetes.io~projected/kube-api-access-f5656 
DeviceMajor:0 DeviceMinor:586 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-881 DeviceMajor:0 DeviceMinor:881 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/549bd192-0235-4994-b485-f1b10d16f6b5/volumes/kubernetes.io~projected/kube-api-access-pwqp6 DeviceMajor:0 DeviceMinor:426 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/070b85a0-f076-4750-aa00-dabba401dc75/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:756 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-444 DeviceMajor:0 DeviceMinor:444 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6da84d85f2972436dd4f3787492391fdf9d5e5e6bdc8d3e5f13761666cdfd3b/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0ac1a605-d2d5-4004-96f5-121c20555bde/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:730 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:774 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~projected/kube-api-access-lpdlr DeviceMajor:0 DeviceMinor:221 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/257a4a8b-014c-4473-80a0-e95cf6d41bf1/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:526 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/60e17cd1-c520-4d8d-8c72-47bf73b8cc66/volumes/kubernetes.io~projected/kube-api-access-xg9zz DeviceMajor:0 DeviceMinor:868 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-928 DeviceMajor:0 DeviceMinor:928 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:245 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-687 DeviceMajor:0 DeviceMinor:687 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-873 DeviceMajor:0 DeviceMinor:873 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/29733a7ea73e3174735d72b2210dd940a71dd3f008e394e00385294d9ba36ee3/userdata/shm DeviceMajor:0 DeviceMinor:1036 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-412 DeviceMajor:0 DeviceMinor:412 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-853 DeviceMajor:0 DeviceMinor:853 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5448b59a-b731-45a3-9ded-d25315f597fb/volumes/kubernetes.io~projected/kube-api-access-d8kvd DeviceMajor:0 DeviceMinor:1065 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/03e6a6324c34d7bf4b86e7eced1bfea7054e77f627892ff596f0fda33c1d39e2/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/kube-api-access-m5vcv DeviceMajor:0 DeviceMinor:230 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/324185d8aba3ef3e122592b3ddf0fb321d8d4d7598b9bfc330b8735d340f3d78/userdata/shm DeviceMajor:0 DeviceMinor:1072 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/40709622bc83dd44130ec2874b3fecd53ec9c74c9ec5ea39d2f7a0dcddaf6a5c/userdata/shm DeviceMajor:0 DeviceMinor:375 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-736 DeviceMajor:0 DeviceMinor:736 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a305f45-8689-45a8-8c8b-5954f2c863df/volumes/kubernetes.io~projected/kube-api-access-zp6pp DeviceMajor:0 DeviceMinor:223 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b10584c2-ef04-4649-bcb6-9222c9530c3f/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:484 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33/volumes/kubernetes.io~projected/kube-api-access-bt7hs DeviceMajor:0 DeviceMinor:777 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-173 DeviceMajor:0 DeviceMinor:173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1180 
DeviceMajor:0 DeviceMinor:1180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3ff2ab1c-7057-4e18-8e32-68807f86532a/volumes/kubernetes.io~projected/kube-api-access-8c4rc DeviceMajor:0 DeviceMinor:231 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c60be2c4f1a0c68603912b47e268ac1ef0712a8ed512ee20014ad96ccd12d01/userdata/shm DeviceMajor:0 DeviceMinor:529 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/48f99840-4d9e-49c5-819e-0bb15493feb5/volumes/kubernetes.io~projected/kube-api-access-mb5l4 DeviceMajor:0 DeviceMinor:829 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-603 DeviceMajor:0 DeviceMinor:603 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d5d5f29010412c6336405d3c5516283cb7d7f5b2df47504d4448651a9a52ed98/userdata/shm DeviceMajor:0 DeviceMinor:329 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1141 DeviceMajor:0 DeviceMinor:1141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1203 DeviceMajor:0 DeviceMinor:1203 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d0f42a72-24c7-49e6-8edb-97b2b0d6183a/volumes/kubernetes.io~projected/kube-api-access-26dtr DeviceMajor:0 DeviceMinor:820 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-379 DeviceMajor:0 DeviceMinor:379 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-266 DeviceMajor:0 DeviceMinor:266 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4d5479f3-51ec-4b93-8188-21cdda44828d/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls 
DeviceMajor:0 DeviceMinor:437 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-728 DeviceMajor:0 DeviceMinor:728 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9da11462-a91d-4d02-8614-78b4c5b2f7e2/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:776 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-855 DeviceMajor:0 DeviceMinor:855 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6724c795aeefb2de7ccb8edf6dd545a4648253bccf79de04ddb0f389fe53a8e7/userdata/shm DeviceMajor:0 DeviceMinor:1074 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-497 DeviceMajor:0 DeviceMinor:497 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/326eff89f13ef648d073ebec6b104b4118323f6c130c4cf1f4122764a419957e/userdata/shm DeviceMajor:0 DeviceMinor:762 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/8f9db15a-8854-485b-9863-9cbe5dddd977/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/803de28e-3b31-4ea2-9b97-87a733635a5c/volumes/kubernetes.io~projected/kube-api-access-gchrx DeviceMajor:0 DeviceMinor:307 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5aa507cf-017d-44f5-8662-77547f82fb51/volumes/kubernetes.io~projected/kube-api-access-jt6sd DeviceMajor:0 DeviceMinor:719 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-955 DeviceMajor:0 DeviceMinor:955 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/72d184f62fa595f3a7463191ce616e1db275cdc732a1ab006b74065651d152d4/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/257a4a8b-014c-4473-80a0-e95cf6d41bf1/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:480 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:476 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-628 DeviceMajor:0 DeviceMinor:628 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1123 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1edde4bf-4554-4ab2-b588-513ad84a9bae/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:801 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9d8af021-f20f-48a2-8b2a-3a5a3f37237f/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:773 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6ae68534d60ba95b9a0cc4c1bb4a76e1716cb7e67493f9e7d66360f5bc7a13b3/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d32de03a3a8ddd97f3e65197fb212f2ef727ae3a334417e63b4520866c016ec6/userdata/shm DeviceMajor:0 DeviceMinor:280 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-520 DeviceMajor:0 DeviceMinor:520 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1099 DeviceMajor:0 DeviceMinor:1099 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:608 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/34c705593dd577219134e52fa5f1f4ac1bf3a254e75ac17359d23f2432c84086/userdata/shm DeviceMajor:0 DeviceMinor:341 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e94f5c752f1fded64a3ee340fb34a998ddce5e3acb0a9a9e83f157fbccc7394/userdata/shm DeviceMajor:0 DeviceMinor:153 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-540 DeviceMajor:0 DeviceMinor:540 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d4855d0948cb05692ea183985279dcd65ea773074cc3b3ff4694481fd014efe8/userdata/shm DeviceMajor:0 DeviceMinor:428 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3ff2ab1c-7057-4e18-8e32-68807f86532a/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:454 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16/volumes/kubernetes.io~projected/kube-api-access-rvvhh DeviceMajor:0 DeviceMinor:720 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~projected/kube-api-access-kjcjm DeviceMajor:0 DeviceMinor:228 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ec3168fc-6c8f-4603-94e0-17b1ae22a802/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-787 DeviceMajor:0 DeviceMinor:787 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1076 DeviceMajor:0 DeviceMinor:1076 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-363 DeviceMajor:0 DeviceMinor:363 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-435 DeviceMajor:0 DeviceMinor:435 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:442 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c87545aa-11c2-4e6e-8c13-16eeff3be83b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:775 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-366 
DeviceMajor:0 DeviceMinor:366 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-768 DeviceMajor:0 DeviceMinor:768 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-910 DeviceMajor:0 DeviceMinor:910 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-451 DeviceMajor:0 DeviceMinor:451 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f00c30651131dc152cafde2dc4f58ee3081e7ee0af524ba7783523529e49fba/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5b796628-a6ca-4d5c-9870-0ca60b9372aa/volumes/kubernetes.io~projected/kube-api-access-48nns DeviceMajor:0 DeviceMinor:1070 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/48f99840-4d9e-49c5-819e-0bb15493feb5/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:828 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4df756f0-c6b6-4730-842a-7ee9227397ae/volumes/kubernetes.io~projected/kube-api-access-8k4c5 DeviceMajor:0 DeviceMinor:1035 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f87662b9-6ac6-44f3-8a16-ff858c2baa91/volumes/kubernetes.io~projected/kube-api-access-zk4sg DeviceMajor:0 DeviceMinor:138 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-401 DeviceMajor:0 DeviceMinor:401 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/549bd192-0235-4994-b485-f1b10d16f6b5/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:425 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1003 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cc41129be016cbf901b0a2cc5f025302b5359f5df446ef56a8371863c53e45e5/userdata/shm DeviceMajor:0 DeviceMinor:650 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:63 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-262 DeviceMajor:0 DeviceMinor:262 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-932 DeviceMajor:0 DeviceMinor:932 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1128 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5b796628-a6ca-4d5c-9870-0ca60b9372aa/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1066 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-892 DeviceMajor:0 DeviceMinor:892 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9da11462-a91d-4d02-8614-78b4c5b2f7e2/volumes/kubernetes.io~projected/kube-api-access-hp847 DeviceMajor:0 DeviceMinor:780 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1edde4bf-4554-4ab2-b588-513ad84a9bae/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:798 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-602 DeviceMajor:0 DeviceMinor:602 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~projected/kube-api-access-cg69z DeviceMajor:0 DeviceMinor:125 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a63b48f34aee10c2a5c7b02c8aaeb3e69aba820b8bbd971ae28a10945e9803c8/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-485 DeviceMajor:0 DeviceMinor:485 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-629 DeviceMajor:0 DeviceMinor:629 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1183 DeviceMajor:0 DeviceMinor:1183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-535 DeviceMajor:0 DeviceMinor:535 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:453 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/13a004f2f44b204dd23b4531ea2ef3d4457cfe84fd8fdc544d2f9015f5747d61/userdata/shm DeviceMajor:0 DeviceMinor:457 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e485e709-32ba-442b-98e5-b4073516c0ab/volumes/kubernetes.io~projected/kube-api-access-qwc4l 
DeviceMajor:0 DeviceMinor:543 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/05a72a4c-5ce8-49d1-8e4f-334f63d4e987/volumes/kubernetes.io~projected/kube-api-access-btws6 DeviceMajor:0 DeviceMinor:563 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:227 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1edde4bf-4554-4ab2-b588-513ad84a9bae/volumes/kubernetes.io~projected/kube-api-access-kxkl8 DeviceMajor:0 DeviceMinor:802 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-785 DeviceMajor:0 DeviceMinor:785 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-897 DeviceMajor:0 DeviceMinor:897 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~projected/kube-api-access-h9cbm DeviceMajor:0 DeviceMinor:479 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d9075a44-22d3-4562-819e-d5a92f013663/volumes/kubernetes.io~projected/kube-api-access-htqw9 DeviceMajor:0 DeviceMinor:539 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-715 DeviceMajor:0 DeviceMinor:715 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/070b85a0-f076-4750-aa00-dabba401dc75/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:755 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-808 DeviceMajor:0 DeviceMinor:808 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3b9f539be02f519c82f90f79644538b0615d221de57b1fd6c7c4726d8ebe602e/userdata/shm DeviceMajor:0 DeviceMinor:89 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c09f42db-e6d7-469d-9761-88a879f6aa6b/volumes/kubernetes.io~projected/kube-api-access-mcb99 DeviceMajor:0 DeviceMinor:585 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6622be09-206e-4d02-90ca-6d9f2fc852aa/volumes/kubernetes.io~projected/kube-api-access-lqg6g DeviceMajor:0 DeviceMinor:374 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/11927952-723f-4d6d-922b-73139abe8877/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:573 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/09802d7d0a05bccad87d5ddf8cff0a47cdae0568f0f82013285bb0d1dc8f5424/userdata/shm DeviceMajor:0 DeviceMinor:783 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/40bc8729edbc545950cfd4248f291a2938cf20232c66e767905dda5ad583859c/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1191 
DeviceMajor:0 DeviceMinor:1191 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3a76099d084f4d1745cd0462dfdd6adb21bbcc918adfa4a88776287e0186cf5c/userdata/shm DeviceMajor:0 DeviceMinor:491 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-826 DeviceMajor:0 DeviceMinor:826 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2afe3890-e844-4dd3-ba49-3ac9178549bf/volumes/kubernetes.io~projected/kube-api-access-d84xk DeviceMajor:0 DeviceMinor:255 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-420 DeviceMajor:0 DeviceMinor:420 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/kube-api-access-qd2mn DeviceMajor:0 DeviceMinor:258 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9ad49e483c47605d1fda52e7e670f8d0dd2ee6b7b1f41e2ebd66c5396af192f/userdata/shm DeviceMajor:0 DeviceMinor:738 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:686 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1222 DeviceMajor:0 DeviceMinor:1222 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-456 DeviceMajor:0 DeviceMinor:456 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/866cf034-8fd8-4f16-8e9b-68627228aa8d/volumes/kubernetes.io~projected/kube-api-access-mnrlx DeviceMajor:0 DeviceMinor:226 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~projected/kube-api-access-hp2qn DeviceMajor:0 DeviceMinor:248 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-278 DeviceMajor:0 DeviceMinor:278 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/42b4d53c-af72-44c8-9605-271445f95f87/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:448 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4e6ecc16-19cb-4b66-801f-b958b10d0ce7/volumes/kubernetes.io~projected/kube-api-access-gn8w5 DeviceMajor:0 DeviceMinor:749 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3ae050b82d12b25aa641b68f4f3b48f026796f7cf0455a63a6e8d7c183a407db/userdata/shm DeviceMajor:0 DeviceMinor:806 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~projected/kube-api-access-22bwx DeviceMajor:0 DeviceMinor:234 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4d051c8ad32b7669f426e6d80e6632cee3e398cb08f827d5c2ff51c92ed352a3/userdata/shm DeviceMajor:0 DeviceMinor:886 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/571b031f77274fed6328f6e07d585dcfd8fd69050ed40ecc9a578fd8f3044381/userdata/shm DeviceMajor:0 DeviceMinor:326 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1014 DeviceMajor:0 DeviceMinor:1014 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1069 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1103 DeviceMajor:0 DeviceMinor:1103 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f2d2633be257a3a88aea0a33608fb20bbf5d7f015d883bb5bf430f64888b7d47/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1132 DeviceMajor:0 DeviceMinor:1132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-471 DeviceMajor:0 DeviceMinor:471 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/21ea23db5a94394fed39e6756a1919898e68c50238c79a5641bf3126f4447416/userdata/shm DeviceMajor:0 DeviceMinor:342 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4df756f0-c6b6-4730-842a-7ee9227397ae/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1026 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1134 DeviceMajor:0 DeviceMinor:1134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c87545aa-11c2-4e6e-8c13-16eeff3be83b/volumes/kubernetes.io~projected/kube-api-access-pwfzq DeviceMajor:0 DeviceMinor:778 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9d8af021-f20f-48a2-8b2a-3a5a3f37237f/volumes/kubernetes.io~projected/kube-api-access-dzxzs DeviceMajor:0 DeviceMinor:782 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/66f49a19-0e3b-4611-b8a6-5f5687fa20b6/volumes/kubernetes.io~projected/kube-api-access-knkb7 DeviceMajor:0 DeviceMinor:243 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/e1e2b9079d6118c595611f9ff8bcc0950650ff2a128d6d9e608f418bd87daef1/userdata/shm DeviceMajor:0 DeviceMinor:810 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1100 DeviceMajor:0 DeviceMinor:1100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~projected/kube-api-access-q5hq9 DeviceMajor:0 DeviceMinor:1129 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c09f42db-e6d7-469d-9761-88a879f6aa6b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:1172 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/11927952-723f-4d6d-922b-73139abe8877/volumes/kubernetes.io~projected/kube-api-access-kgb25 DeviceMajor:0 DeviceMinor:571 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/070b85a0-f076-4750-aa00-dabba401dc75/volumes/kubernetes.io~projected/kube-api-access-nlmhn DeviceMajor:0 DeviceMinor:760 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2195f7be-b41e-4ae2-b737-d5782e0d41a8/volumes/kubernetes.io~projected/kube-api-access-r657p DeviceMajor:0 DeviceMinor:1001 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-388 DeviceMajor:0 DeviceMinor:388 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1156 
DeviceMajor:0 DeviceMinor:1156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-982 DeviceMajor:0 DeviceMinor:982 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5b796628-a6ca-4d5c-9870-0ca60b9372aa/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1067 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-384 DeviceMajor:0 DeviceMinor:384 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-418 DeviceMajor:0 DeviceMinor:418 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-553 DeviceMajor:0 DeviceMinor:553 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-575 DeviceMajor:0 DeviceMinor:575 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-744 DeviceMajor:0 DeviceMinor:744 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~projected/kube-api-access-8rfpp DeviceMajor:0 DeviceMinor:99 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1127 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1163 DeviceMajor:0 DeviceMinor:1163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2a05e72d-836f-40e0-8a5c-ee02dce494b3/volumes/kubernetes.io~projected/kube-api-access-qdb2x DeviceMajor:0 DeviceMinor:633 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/86774fd7-7c26-4b41-badb-de1004397637/volumes/kubernetes.io~projected/kube-api-access-tfxm5 DeviceMajor:0 DeviceMinor:757 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1209 DeviceMajor:0 DeviceMinor:1209 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/05e1407c7b27a4b6e8d757f9a77812ff8adcb8afeba6392964446e6020251829/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee/volumes/kubernetes.io~projected/kube-api-access-s2znn DeviceMajor:0 DeviceMinor:766 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1049 DeviceMajor:0 DeviceMinor:1049 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~projected/kube-api-access-grplv DeviceMajor:0 DeviceMinor:222 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-284 DeviceMajor:0 DeviceMinor:284 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-591 DeviceMajor:0 DeviceMinor:591 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-639 DeviceMajor:0 DeviceMinor:639 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f4e5ae9525fe60361341f421bac6c1976c0d8c217394b9ae9ea8bc8043db8345/userdata/shm DeviceMajor:0 DeviceMinor:1010 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d9075a44-22d3-4562-819e-d5a92f013663/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:537 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2baa20e270e178f3e40e4ef86226c93b0ff3020bf6dac2cb5d4f63eecde92557/userdata/shm DeviceMajor:0 DeviceMinor:1199 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6/volumes/kubernetes.io~projected/kube-api-access-m7v6s DeviceMajor:0 DeviceMinor:747 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1211 DeviceMajor:0 DeviceMinor:1211 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-817 DeviceMajor:0 DeviceMinor:817 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5448b59a-b731-45a3-9ded-d25315f597fb/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1060 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-740 DeviceMajor:0 DeviceMinor:740 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1008 DeviceMajor:0 DeviceMinor:1008 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1c12a5d5-711f-4663-974c-c4b06e15fc39/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-789 DeviceMajor:0 DeviceMinor:789 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/823dd75fa90067312411e552ed573617320e3f633eba91399bcfb19342dfaab8/userdata/shm DeviceMajor:0 DeviceMinor:1130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:129 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfcbc3062b54d8acdba8fb18315546ff9d2740da776054d7f9430b71a5238353/userdata/shm DeviceMajor:0 DeviceMinor:251 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48e8b214299b9f4db7879f744943297a290919968a8d4c7d50b6a78a9ada043a/userdata/shm DeviceMajor:0 DeviceMinor:488 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/52b02d0e4a7c2c479465f8242f01da199717910ea7b898fbcda40528a83b169e/userdata/shm DeviceMajor:0 DeviceMinor:786 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-848 DeviceMajor:0 DeviceMinor:848 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9aa4b44d-f202-4670-afab-44b38960026f/volumes/kubernetes.io~projected/kube-api-access-bjvtr DeviceMajor:0 DeviceMinor:105 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a9a8fe1cfe7b05b245d74255fe06ada29f995a6a0f2d268f7c34ca85219321b3/userdata/shm DeviceMajor:0 DeviceMinor:303 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-669 DeviceMajor:0 DeviceMinor:669 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8b939255ebac1f66f189aaed584b6e7c61496fc54de0eca1dee70e7efa443532/userdata/shm DeviceMajor:0 DeviceMinor:726 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-469 DeviceMajor:0 DeviceMinor:469 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b8d40b37-0f3d-4531-9fa8-eda965d2337d/volumes/kubernetes.io~projected/kube-api-access-l5rht DeviceMajor:0 DeviceMinor:232 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d56ffe6fa9b01bb963c33e630f78eeefc536f0ea18493c909ad582b0bbe668a2/userdata/shm DeviceMajor:0 DeviceMinor:724 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/60b98cd864b31c5d3ce33f7e617eaf280215ab256f27e08a3aa813b955cd4550/userdata/shm DeviceMajor:0 DeviceMinor:779 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1000 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-171 DeviceMajor:0 DeviceMinor:171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4d5479f3-51ec-4b93-8188-21cdda44828d/volumes/kubernetes.io~projected/kube-api-access-j6xlb DeviceMajor:0 DeviceMinor:224 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5ed5e77b-948b-4d94-ac9f-440ee3c07e18/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:302 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:486 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-542 DeviceMajor:0 DeviceMinor:542 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~projected/kube-api-access-hkdfn DeviceMajor:0 
DeviceMinor:1002 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-977 DeviceMajor:0 DeviceMinor:977 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1085 DeviceMajor:0 DeviceMinor:1085 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-527 DeviceMajor:0 DeviceMinor:527 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:816 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1115 DeviceMajor:0 DeviceMinor:1115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-742 DeviceMajor:0 DeviceMinor:742 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/04e2d5b4e65ad4d6e19280743e00933d366bcbdfdc3c5d7c64aba41673f1a662/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-383 DeviceMajor:0 DeviceMinor:383 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~projected/kube-api-access-tdgld DeviceMajor:0 DeviceMinor:475 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1054 DeviceMajor:0 DeviceMinor:1054 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd93ec4fe47e71fd21c0051085976706d225fa5cba2fcde1e22ce417bdc6d6e7/userdata/shm 
DeviceMajor:0 DeviceMinor:593 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-796 DeviceMajor:0 DeviceMinor:796 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b19f149420c8c5bdd28117ec0014c144ba254d289aeb742b7f29c424c5d661a/userdata/shm DeviceMajor:0 DeviceMinor:1078 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/14f6e3b2-716c-4392-b3c8-75b2168ccfb7/volumes/kubernetes.io~projected/kube-api-access-gh6kl DeviceMajor:0 DeviceMinor:579 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d019d509921c4d166cd7651a1a35a29172d4e8a0b6f47b7d8c8b1a18d02dbf3c/userdata/shm DeviceMajor:0 DeviceMinor:1167 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-577 DeviceMajor:0 DeviceMinor:577 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:474 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1068 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c797020833454d5ed2c33acc860a0f30fce513778328e3b025208a981e1fff3f/userdata/shm 
DeviceMajor:0 DeviceMinor:74 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a1a998af-4fc0-4078-a6a0-93dde6c00508/volumes/kubernetes.io~projected/kube-api-access-p29zg DeviceMajor:0 DeviceMinor:244 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3ccf2b15838415a4be52f403df345301db18d66d37f6fa09df717882bb3b0fda/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/50614fe1bae99eef2fccbbf06f52ab65208692c910cfe5fe3711fe68d7b32786/userdata/shm DeviceMajor:0 DeviceMinor:460 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-930 DeviceMajor:0 DeviceMinor:930 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/427d8baf8b36c464fef89b4b9363187b5106a9ee18a5220827b5f1bf40b93c0d/userdata/shm DeviceMajor:0 DeviceMinor:330 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/66f49a19-0e3b-4611-b8a6-5f5687fa20b6/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:445 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/79bb87a4-8834-4c73-834e-356ccc1f7f9b/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:447 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1038 DeviceMajor:0 DeviceMinor:1038 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d8af021-f20f-48a2-8b2a-3a5a3f37237f/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 
DeviceMinor:772 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-819 DeviceMajor:0 DeviceMinor:819 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6ed47c57-533f-43e4-88eb-07da29b4878f/volumes/kubernetes.io~projected/kube-api-access-rjk5l DeviceMajor:0 DeviceMinor:233 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b10584c2-ef04-4649-bcb6-9222c9530c3f/volumes/kubernetes.io~projected/kube-api-access-zsswm DeviceMajor:0 DeviceMinor:482 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-354 DeviceMajor:0 DeviceMinor:354 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1023 DeviceMajor:0 DeviceMinor:1023 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d288e5d0-0976-477f-be14-b3d5828e0482/volumes/kubernetes.io~projected/kube-api-access-5k8rp DeviceMajor:0 DeviceMinor:364 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-587 DeviceMajor:0 DeviceMinor:587 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/cf9561f8a446435dd3e05b7973785f1768a9224b0e43a36e35e60c9ec1bc16a2/userdata/shm DeviceMajor:0 DeviceMinor:340 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-981 DeviceMajor:0 DeviceMinor:981 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-386 DeviceMajor:0 DeviceMinor:386 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-678 DeviceMajor:0 DeviceMinor:678 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1205 DeviceMajor:0 DeviceMinor:1205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/97a747ef867987de8a139981f17a1b239fcb5c28199b67ab78094a7f8154dc7c/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3a7d2e60ddc43ee697baaf390993508cae16887b2a5d4cb1ef47d6c884025855/userdata/shm DeviceMajor:0 DeviceMinor:784 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c7ee7c0157ea1a3aa7abb463c26a21b9f9c80a7c51726be3dad9c08112783426/userdata/shm DeviceMajor:0 DeviceMinor:290 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1434c4a2-5c4d-478a-a16a-7d6a52ea3099/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b0b21ce8c91e31c5d3fafde2dc1d7d9feb5cca70a9bf65bb781c974d266575e/userdata/shm 
DeviceMajor:0 DeviceMinor:328 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-942 DeviceMajor:0 DeviceMinor:942 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5448b59a-b731-45a3-9ded-d25315f597fb/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1064 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-614 DeviceMajor:0 DeviceMinor:614 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-812 DeviceMajor:0 DeviceMinor:812 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:993 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9a02e284386b73dcacdc66689703a6ce2a89d3ae22d94162ffdd3488c53d3335/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:473 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/60e17cd1-c520-4d8d-8c72-47bf73b8cc66/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:863 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1080 DeviceMajor:0 DeviceMinor:1080 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-417 DeviceMajor:0 DeviceMinor:417 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64e301f64932b9e42866a17f98ce668f6dac597e77b8c15551a291086a0c377b/userdata/shm DeviceMajor:0 DeviceMinor:490 Capacity:67108864 
Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-901 DeviceMajor:0 DeviceMinor:901 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/05a72a4c-5ce8-49d1-8e4f-334f63d4e987/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:559 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/574bf255-14b3-40af-b240-2d3abd5b86b8/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:209 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9d325882d9051ed4dcae015a4e37aab1ae44cf25e837fbca8dbbfcfc4d9934e5/userdata/shm DeviceMajor:0 DeviceMinor:458 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-695 DeviceMajor:0 DeviceMinor:695 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8cf9326b-bc23-45c2-82c4-9c08c739ac5a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:220 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f43bd68b145b0d4f8b86d52ece37d2ddf197260fbbf0dee345fc0c4e0be32ff/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37b2e803-302b-4650-b18f-d3d2dd703bd5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-381 DeviceMajor:0 DeviceMinor:381 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d9ff345f3e6004990e637fa6bd4c1c17fad38322042b096639037cf7570053ac/userdata/shm DeviceMajor:0 DeviceMinor:722 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/484e6d0b-d057-4658-8e49-bbe7e6f6ee86/volumes/kubernetes.io~projected/kube-api-access-qbdwm DeviceMajor:0 DeviceMinor:735 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c455a959-d764-4b4f-a1e0-95c27495dd9d/volumes/kubernetes.io~projected/kube-api-access-2cpdn DeviceMajor:0 DeviceMinor:229 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/86774fd7-7c26-4b41-badb-de1004397637/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:754 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d0f42a72-24c7-49e6-8edb-97b2b0d6183a/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:815 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-916 DeviceMajor:0 DeviceMinor:916 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1017 DeviceMajor:0 DeviceMinor:1017 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1185 DeviceMajor:0 DeviceMinor:1185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/11fee05d2806af61a462a41f2f8d14b1a8fc382251199b04114ff9afb908d5a1/userdata/shm DeviceMajor:0 DeviceMinor:831 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:746 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4df756f0-c6b6-4730-842a-7ee9227397ae/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1027 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8/volumes/kubernetes.io~projected/kube-api-access-qdg6f DeviceMajor:0 DeviceMinor:1071 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:594 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5e13cffe1976b1fe526e31bded64fe9c448e434d19da41cefd16fb763080f8bc/userdata/shm DeviceMajor:0 DeviceMinor:791 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/volumes/kubernetes.io~projected/kube-api-access-j25nl DeviceMajor:0 DeviceMinor:885 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/939a3da3-62e7-4376-853d-dc333465446c/volumes/kubernetes.io~projected/kube-api-access-t2q2f DeviceMajor:0 DeviceMinor:696 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/257a4a8b-014c-4473-80a0-e95cf6d41bf1/volumes/kubernetes.io~projected/kube-api-access-hzv5v DeviceMajor:0 DeviceMinor:481 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1040 DeviceMajor:0 DeviceMinor:1040 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/186ea687b2b873b969d378ad858dd467c244f19248903ea4dfa2320cfbb636aa/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-467 DeviceMajor:0 DeviceMinor:467 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:477 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-524 DeviceMajor:0 DeviceMinor:524 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d72d950-cfb4-4ed5-9ad6-f7266b937493/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:478 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4e6ecc16-19cb-4b66-801f-b958b10d0ce7/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:748 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:580 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-703 DeviceMajor:0 DeviceMinor:703 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-557 DeviceMajor:0 
DeviceMinor:557 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-803 DeviceMajor:0 DeviceMinor:803 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:765 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-878 DeviceMajor:0 DeviceMinor:878 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-390 DeviceMajor:0 DeviceMinor:390 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0ac1a605-d2d5-4004-96f5-121c20555bde/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:721 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/eb778c86-ea51-4eab-82b8-a8e0bec0f050/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:992 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/79bb87a4-8834-4c73-834e-356ccc1f7f9b/volumes/kubernetes.io~projected/kube-api-access-56qz6 DeviceMajor:0 DeviceMinor:123 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8b123d8cf30f9a2e585e105a6d1e6a093488b477d996f0893a6a50a5c5b92b38/userdata/shm DeviceMajor:0 DeviceMinor:487 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-549 DeviceMajor:0 DeviceMinor:549 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3d5f1f4095f01f19dc1c943afe4e4b0c9a80883c821d2a0dcacc2ad4ee4f8b25/userdata/shm DeviceMajor:0 DeviceMinor:1004 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-821 DeviceMajor:0 DeviceMinor:821 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c7d9738c3adc0c979eef42141f9dc2b629b15190348d5c5364a237fdd93a9dff/userdata/shm DeviceMajor:0 DeviceMinor:93 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5843b0d4-a538-4261-b425-598e318c9d07/volumes/kubernetes.io~projected/kube-api-access-r6nnz DeviceMajor:0 DeviceMinor:118 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7667717b-fb74-456b-8615-16475cb69e98/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:246 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-636 DeviceMajor:0 DeviceMinor:636 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-824 DeviceMajor:0 DeviceMinor:824 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c3ef257d3865e4ef11b927a21a93e51aafc4c9ebd98baa7d651806b2a01e30df/userdata/shm DeviceMajor:0 DeviceMinor:236 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-871 DeviceMajor:0 DeviceMinor:871 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-427 DeviceMajor:0 DeviceMinor:427 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-544 DeviceMajor:0 DeviceMinor:544 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c455a959-d764-4b4f-a1e0-95c27495dd9d/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:443 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a283bd1cab37da2c35528d1fc1a0a03b24555657ec54c53a6d0fcce5a530df6a/userdata/shm DeviceMajor:0 DeviceMinor:792 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1012 DeviceMajor:0 DeviceMinor:1012 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1196 DeviceMajor:0 DeviceMinor:1196 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a305f45-8689-45a8-8c8b-5954f2c863df/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:446 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-641 DeviceMajor:0 DeviceMinor:641 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/14f6e3b2-716c-4392-b3c8-75b2168ccfb7/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1164 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2d8c2c573acc02ece57d91166be062a427bbc681f8936d54a20df38a4936dc09/userdata/shm DeviceMajor:0 DeviceMinor:322 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:04e2d5b4e65ad4d MacAddress:e6:63:9f:17:e4:b8 Speed:10000 Mtu:8900} {Name:0703b273c4e03bb MacAddress:46:d4:b2:9b:c0:d2 Speed:10000 Mtu:8900} {Name:09802d7d0a05bcc MacAddress:5a:df:4a:57:a1:93 Speed:10000 Mtu:8900} 
{Name:11fee05d2806af6 MacAddress:e6:89:d4:d2:73:72 Speed:10000 Mtu:8900} {Name:13a004f2f44b204 MacAddress:f6:7d:ec:8c:c1:93 Speed:10000 Mtu:8900} {Name:164736e7418a21c MacAddress:2a:31:2e:4e:dc:18 Speed:10000 Mtu:8900} {Name:186ea687b2b873b MacAddress:12:81:ce:84:f2:49 Speed:10000 Mtu:8900} {Name:1ee97873740b9b1 MacAddress:c2:b2:55:ff:1b:79 Speed:10000 Mtu:8900} {Name:21ea23db5a94394 MacAddress:f6:9c:e0:56:b6:65 Speed:10000 Mtu:8900} {Name:2b19f149420c8c5 MacAddress:6e:0f:a7:26:13:07 Speed:10000 Mtu:8900} {Name:2baa20e270e178f MacAddress:86:c3:a7:42:bd:7c Speed:10000 Mtu:8900} {Name:2d8c2c573acc02e MacAddress:2e:73:2f:f2:fb:2f Speed:10000 Mtu:8900} {Name:324185d8aba3ef3 MacAddress:9e:f4:4a:14:3d:60 Speed:10000 Mtu:8900} {Name:326eff89f13ef64 MacAddress:d6:0a:ad:cd:f8:0b Speed:10000 Mtu:8900} {Name:34c705593dd5772 MacAddress:42:91:9d:f4:b5:1b Speed:10000 Mtu:8900} {Name:3a76099d084f4d1 MacAddress:7e:a4:fd:64:d3:c3 Speed:10000 Mtu:8900} {Name:3a7d2e60ddc43ee MacAddress:22:cf:97:a7:30:aa Speed:10000 Mtu:8900} {Name:3ae050b82d12b25 MacAddress:82:d9:17:e3:a5:bc Speed:10000 Mtu:8900} {Name:3d5f1f4095f01f1 MacAddress:7a:3b:4c:55:8c:4c Speed:10000 Mtu:8900} {Name:40709622bc83dd4 MacAddress:12:85:f4:02:05:2d Speed:10000 Mtu:8900} {Name:40bc8729edbc545 MacAddress:2e:04:5b:fa:6e:90 Speed:10000 Mtu:8900} {Name:48e8b214299b9f4 MacAddress:2a:45:b2:50:73:7f Speed:10000 Mtu:8900} {Name:50614fe1bae99ee MacAddress:8a:87:ea:78:12:99 Speed:10000 Mtu:8900} {Name:52b02d0e4a7c2c4 MacAddress:ee:c4:e8:7a:92:1f Speed:10000 Mtu:8900} {Name:571b031f77274fe MacAddress:fe:08:68:d5:1f:c3 Speed:10000 Mtu:8900} {Name:5e13cffe1976b1f MacAddress:8a:0b:3f:f5:bc:d8 Speed:10000 Mtu:8900} {Name:64e301f64932b9e MacAddress:be:11:fa:5e:81:24 Speed:10000 Mtu:8900} {Name:6ae68534d60ba95 MacAddress:b2:97:dc:e0:22:11 Speed:10000 Mtu:8900} {Name:6b0b21ce8c91e31 MacAddress:52:5c:df:52:cc:80 Speed:10000 Mtu:8900} {Name:7cf1f1393ed4dc7 MacAddress:f2:5a:07:76:3f:29 Speed:10000 Mtu:8900} {Name:823dd75fa900673 
MacAddress:42:cd:ad:b4:33:ed Speed:10000 Mtu:8900} {Name:8b123d8cf30f9a2 MacAddress:0e:51:65:33:42:7c Speed:10000 Mtu:8900} {Name:8b939255ebac1f6 MacAddress:e6:ca:b1:79:6c:bb Speed:10000 Mtu:8900} {Name:8f00c30651131dc MacAddress:c6:73:54:b5:7d:a1 Speed:10000 Mtu:8900} {Name:8f43bd68b145b0d MacAddress:fa:17:4c:92:fc:ab Speed:10000 Mtu:8900} {Name:981440e84066752 MacAddress:5e:ff:74:d1:1d:15 Speed:10000 Mtu:8900} {Name:9a02e284386b73d MacAddress:6e:9c:7c:08:40:57 Speed:10000 Mtu:8900} {Name:9c60be2c4f1a0c6 MacAddress:ce:b2:2d:8b:2e:02 Speed:10000 Mtu:8900} {Name:9d325882d9051ed MacAddress:ca:e0:ab:f2:9e:ea Speed:10000 Mtu:8900} {Name:a283bd1cab37da2 MacAddress:76:9a:ce:37:08:43 Speed:10000 Mtu:8900} {Name:a9a8fe1cfe7b05b MacAddress:fe:a8:9c:cd:e2:2a Speed:10000 Mtu:8900} {Name:b307d791edfe64a MacAddress:76:e3:ca:e9:de:6a Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:a2:91:ee:2b:8e:24 Speed:0 Mtu:8900} {Name:c3ef257d3865e4e MacAddress:6e:65:5e:3f:c9:33 Speed:10000 Mtu:8900} {Name:cc41129be016cbf MacAddress:da:62:d3:2d:50:16 Speed:10000 Mtu:8900} {Name:cf9561f8a446435 MacAddress:36:11:0d:32:81:9b Speed:10000 Mtu:8900} {Name:cfcbc3062b54d8a MacAddress:1e:3e:a8:5a:5e:aa Speed:10000 Mtu:8900} {Name:d019d509921c4d1 MacAddress:1a:87:74:73:77:11 Speed:10000 Mtu:8900} {Name:d32de03a3a8ddd9 MacAddress:3a:40:4b:69:29:48 Speed:10000 Mtu:8900} {Name:d4855d0948cb056 MacAddress:26:42:17:23:a4:72 Speed:10000 Mtu:8900} {Name:d56ffe6fa9b01bb MacAddress:46:ab:f1:97:c5:2c Speed:10000 Mtu:8900} {Name:d5d5f29010412c6 MacAddress:7a:75:2c:c1:f8:9c Speed:10000 Mtu:8900} {Name:d9ff345f3e60049 MacAddress:4e:4d:9f:18:c2:52 Speed:10000 Mtu:8900} {Name:dd93ec4fe47e71f MacAddress:de:46:ea:f8:ca:28 Speed:10000 Mtu:8900} {Name:e1e2b9079d6118c MacAddress:d2:a8:9f:03:4c:c5 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:e1:20:b5 Speed:-1 Mtu:9000} {Name:eth2 
MacAddress:fa:16:3e:f6:43:7d Speed:-1 Mtu:9000} {Name:f2d2633be257a3a MacAddress:e6:2d:a9:a6:35:3c Speed:10000 Mtu:8900} {Name:f4e5ae9525fe603 MacAddress:c2:c4:e8:13:bd:f2 Speed:10000 Mtu:8900} {Name:fcd78f90ad99c24 MacAddress:12:87:d9:90:45:8d Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:fe:76:e9:ad:1f:61 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 
Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 10:56:56.656734 master-0 kubenswrapper[33013]: I0313 10:56:56.656201 33013 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 13 10:56:56.656734 master-0 kubenswrapper[33013]: I0313 10:56:56.656272 33013 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 10:56:56.656734 master-0 kubenswrapper[33013]: I0313 10:56:56.656553 33013 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 10:56:56.657037 master-0 kubenswrapper[33013]: I0313 10:56:56.656728 33013 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 10:56:56.657037 master-0 kubenswrapper[33013]: I0313 10:56:56.656762 33013 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi"
,"Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 10:56:56.657037 master-0 kubenswrapper[33013]: I0313 10:56:56.656947 33013 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 10:56:56.657037 master-0 kubenswrapper[33013]: I0313 10:56:56.656958 33013 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 10:56:56.657037 master-0 kubenswrapper[33013]: I0313 10:56:56.656966 33013 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 10:56:56.657037 master-0 kubenswrapper[33013]: I0313 10:56:56.656989 33013 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 10:56:56.657037 master-0 kubenswrapper[33013]: I0313 10:56:56.657022 33013 state_mem.go:36] "Initialized new in-memory state store" Mar 13 10:56:56.657327 master-0 kubenswrapper[33013]: I0313 10:56:56.657115 33013 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 10:56:56.657327 master-0 kubenswrapper[33013]: I0313 10:56:56.657179 33013 kubelet.go:418] "Attempting to sync node with API server" Mar 13 10:56:56.657327 master-0 kubenswrapper[33013]: I0313 10:56:56.657192 33013 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 10:56:56.657327 master-0 kubenswrapper[33013]: I0313 10:56:56.657206 33013 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 10:56:56.657327 master-0 kubenswrapper[33013]: I0313 10:56:56.657219 33013 kubelet.go:324] "Adding apiserver pod source" Mar 
13 10:56:56.657327 master-0 kubenswrapper[33013]: I0313 10:56:56.657240 33013 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 10:56:56.658566 master-0 kubenswrapper[33013]: I0313 10:56:56.658520 33013 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 10:56:56.658882 master-0 kubenswrapper[33013]: I0313 10:56:56.658861 33013 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 13 10:56:56.659257 master-0 kubenswrapper[33013]: I0313 10:56:56.659216 33013 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 13 10:56:56.659392 master-0 kubenswrapper[33013]: I0313 10:56:56.659372 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 13 10:56:56.659424 master-0 kubenswrapper[33013]: I0313 10:56:56.659397 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 13 10:56:56.659452 master-0 kubenswrapper[33013]: I0313 10:56:56.659425 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 13 10:56:56.659452 master-0 kubenswrapper[33013]: I0313 10:56:56.659437 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 13 10:56:56.659452 master-0 kubenswrapper[33013]: I0313 10:56:56.659445 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 13 10:56:56.659530 master-0 kubenswrapper[33013]: I0313 10:56:56.659453 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 13 10:56:56.659530 master-0 kubenswrapper[33013]: I0313 10:56:56.659461 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 13 10:56:56.659530 master-0 kubenswrapper[33013]: I0313 10:56:56.659469 33013 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Mar 13 10:56:56.659530 master-0 kubenswrapper[33013]: I0313 10:56:56.659479 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 13 10:56:56.659530 master-0 kubenswrapper[33013]: I0313 10:56:56.659488 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 13 10:56:56.659530 master-0 kubenswrapper[33013]: I0313 10:56:56.659499 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 13 10:56:56.659530 master-0 kubenswrapper[33013]: I0313 10:56:56.659514 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 13 10:56:56.659730 master-0 kubenswrapper[33013]: I0313 10:56:56.659556 33013 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 13 10:56:56.660036 master-0 kubenswrapper[33013]: I0313 10:56:56.660013 33013 server.go:1280] "Started kubelet" Mar 13 10:56:56.662069 master-0 kubenswrapper[33013]: I0313 10:56:56.661811 33013 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 10:56:56.662313 master-0 kubenswrapper[33013]: I0313 10:56:56.662154 33013 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 10:56:56.662313 master-0 kubenswrapper[33013]: I0313 10:56:56.662265 33013 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 13 10:56:56.663609 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 13 10:56:56.663968 master-0 kubenswrapper[33013]: I0313 10:56:56.663675 33013 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 10:56:56.671327 master-0 kubenswrapper[33013]: I0313 10:56:56.667160 33013 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 10:56:56.671327 master-0 kubenswrapper[33013]: I0313 10:56:56.667205 33013 server.go:449] "Adding debug handlers to kubelet server" Mar 13 10:56:56.676684 master-0 kubenswrapper[33013]: I0313 10:56:56.676626 33013 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 10:56:56.681909 master-0 kubenswrapper[33013]: I0313 10:56:56.681821 33013 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 13 10:56:56.682126 master-0 kubenswrapper[33013]: I0313 10:56:56.681953 33013 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 10:56:56.682769 master-0 kubenswrapper[33013]: I0313 10:56:56.682558 33013 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 13 10:56:56.682769 master-0 kubenswrapper[33013]: I0313 10:56:56.682604 33013 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 13 10:56:56.685908 master-0 kubenswrapper[33013]: I0313 10:56:56.685151 33013 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 13 10:56:56.685908 master-0 kubenswrapper[33013]: I0313 10:56:56.682514 33013 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 10:24:19 +0000 UTC, rotation deadline is 2026-03-14 04:36:34.223308368 +0000 UTC Mar 13 10:56:56.686913 master-0 kubenswrapper[33013]: I0313 10:56:56.686158 33013 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h39m37.537920448s for next certificate rotation Mar 13 10:56:56.688002 master-0 kubenswrapper[33013]: I0313 10:56:56.687947 
33013 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 10:56:56.688153 master-0 kubenswrapper[33013]: I0313 10:56:56.688114 33013 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 13 10:56:56.688153 master-0 kubenswrapper[33013]: I0313 10:56:56.688144 33013 factory.go:55] Registering systemd factory Mar 13 10:56:56.688153 master-0 kubenswrapper[33013]: I0313 10:56:56.688152 33013 factory.go:221] Registration of the systemd container factory successfully Mar 13 10:56:56.688715 master-0 kubenswrapper[33013]: I0313 10:56:56.688677 33013 factory.go:153] Registering CRI-O factory Mar 13 10:56:56.688715 master-0 kubenswrapper[33013]: I0313 10:56:56.688702 33013 factory.go:221] Registration of the crio container factory successfully Mar 13 10:56:56.688830 master-0 kubenswrapper[33013]: I0313 10:56:56.688738 33013 factory.go:103] Registering Raw factory Mar 13 10:56:56.688830 master-0 kubenswrapper[33013]: I0313 10:56:56.688757 33013 manager.go:1196] Started watching for new ooms in manager Mar 13 10:56:56.690151 master-0 kubenswrapper[33013]: I0313 10:56:56.690127 33013 manager.go:319] Starting recovery of all containers Mar 13 10:56:56.693520 master-0 kubenswrapper[33013]: E0313 10:56:56.693473 33013 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 13 10:56:56.699245 master-0 kubenswrapper[33013]: I0313 10:56:56.699169 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c87545aa-11c2-4e6e-8c13-16eeff3be83b" volumeName="kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-service-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.699440 master-0 kubenswrapper[33013]: I0313 10:56:56.699414 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3" volumeName="kubernetes.io/secret/a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3-tls-certificates" seLinuxMountContext="" Mar 13 10:56:56.699531 master-0 kubenswrapper[33013]: I0313 10:56:56.699512 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b10584c2-ef04-4649-bcb6-9222c9530c3f" volumeName="kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-kube-api-access-zsswm" seLinuxMountContext="" Mar 13 10:56:56.699761 master-0 kubenswrapper[33013]: I0313 10:56:56.699737 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec3168fc-6c8f-4603-94e0-17b1ae22a802" volumeName="kubernetes.io/projected/ec3168fc-6c8f-4603-94e0-17b1ae22a802-kube-api-access" seLinuxMountContext="" Mar 13 10:56:56.699852 master-0 kubenswrapper[33013]: I0313 10:56:56.699835 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="070b85a0-f076-4750-aa00-dabba401dc75" volumeName="kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-images" seLinuxMountContext="" Mar 13 10:56:56.699953 master-0 kubenswrapper[33013]: I0313 10:56:56.699935 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8" volumeName="kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.700036 master-0 kubenswrapper[33013]: I0313 10:56:56.700020 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a998af-4fc0-4078-a6a0-93dde6c00508" volumeName="kubernetes.io/configmap/a1a998af-4fc0-4078-a6a0-93dde6c00508-config" seLinuxMountContext="" Mar 13 10:56:56.700232 master-0 kubenswrapper[33013]: I0313 10:56:56.700212 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33" volumeName="kubernetes.io/secret/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-cert" seLinuxMountContext="" Mar 13 10:56:56.700340 master-0 kubenswrapper[33013]: I0313 10:56:56.700320 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1edde4bf-4554-4ab2-b588-513ad84a9bae" volumeName="kubernetes.io/projected/1edde4bf-4554-4ab2-b588-513ad84a9bae-kube-api-access-kxkl8" seLinuxMountContext="" Mar 13 10:56:56.700481 master-0 kubenswrapper[33013]: I0313 10:56:56.700462 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42b4d53c-af72-44c8-9605-271445f95f87" volumeName="kubernetes.io/configmap/42b4d53c-af72-44c8-9605-271445f95f87-trusted-ca" seLinuxMountContext="" Mar 13 10:56:56.700598 master-0 kubenswrapper[33013]: I0313 10:56:56.700565 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-config" seLinuxMountContext="" Mar 13 10:56:56.700688 master-0 kubenswrapper[33013]: I0313 10:56:56.700670 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5843b0d4-a538-4261-b425-598e318c9d07" volumeName="kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-sysctl-allowlist" seLinuxMountContext="" Mar 13 10:56:56.700772 master-0 kubenswrapper[33013]: I0313 10:56:56.700756 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" volumeName="kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles" seLinuxMountContext="" Mar 13 10:56:56.700861 master-0 kubenswrapper[33013]: I0313 10:56:56.700844 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1edde4bf-4554-4ab2-b588-513ad84a9bae" volumeName="kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-apiservice-cert" seLinuxMountContext="" Mar 13 10:56:56.700961 master-0 kubenswrapper[33013]: I0313 10:56:56.700944 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42b4d53c-af72-44c8-9605-271445f95f87" volumeName="kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert" seLinuxMountContext="" Mar 13 10:56:56.701054 master-0 kubenswrapper[33013]: I0313 10:56:56.701035 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec121f87-93ea-468c-a25f-2ec5e7d0e0ee" volumeName="kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-config" seLinuxMountContext="" Mar 13 10:56:56.701159 master-0 kubenswrapper[33013]: I0313 10:56:56.701138 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" volumeName="kubernetes.io/secret/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.701250 master-0 kubenswrapper[33013]: I0313 10:56:56.701233 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4df756f0-c6b6-4730-842a-7ee9227397ae" volumeName="kubernetes.io/projected/4df756f0-c6b6-4730-842a-7ee9227397ae-kube-api-access-8k4c5" seLinuxMountContext="" Mar 13 10:56:56.701334 master-0 kubenswrapper[33013]: I0313 10:56:56.701318 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-env-overrides" seLinuxMountContext="" Mar 13 10:56:56.701424 master-0 kubenswrapper[33013]: I0313 10:56:56.701407 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec121f87-93ea-468c-a25f-2ec5e7d0e0ee" volumeName="kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-auth-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.701515 master-0 kubenswrapper[33013]: I0313 10:56:56.701498 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b" volumeName="kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-policies" seLinuxMountContext="" Mar 13 10:56:56.703113 master-0 kubenswrapper[33013]: I0313 10:56:56.703083 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e6ecc16-19cb-4b66-801f-b958b10d0ce7" volumeName="kubernetes.io/secret/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.703220 master-0 kubenswrapper[33013]: I0313 10:56:56.703201 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f42a72-24c7-49e6-8edb-97b2b0d6183a" volumeName="kubernetes.io/secret/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-proxy-tls" seLinuxMountContext="" Mar 13 10:56:56.703323 master-0 kubenswrapper[33013]: I0313 10:56:56.703305 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="4df756f0-c6b6-4730-842a-7ee9227397ae" volumeName="kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-certs" seLinuxMountContext="" Mar 13 10:56:56.703423 master-0 kubenswrapper[33013]: I0313 10:56:56.703406 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a305f45-8689-45a8-8c8b-5954f2c863df" volumeName="kubernetes.io/projected/8a305f45-8689-45a8-8c8b-5954f2c863df-kube-api-access-zp6pp" seLinuxMountContext="" Mar 13 10:56:56.703556 master-0 kubenswrapper[33013]: I0313 10:56:56.703537 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f87662b9-6ac6-44f3-8a16-ff858c2baa91" volumeName="kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-env-overrides" seLinuxMountContext="" Mar 13 10:56:56.703712 master-0 kubenswrapper[33013]: I0313 10:56:56.703667 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9075a44-22d3-4562-819e-d5a92f013663" volumeName="kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-tmp" seLinuxMountContext="" Mar 13 10:56:56.703812 master-0 kubenswrapper[33013]: I0313 10:56:56.703795 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d72d950-cfb4-4ed5-9ad6-f7266b937493" volumeName="kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-client" seLinuxMountContext="" Mar 13 10:56:56.703902 master-0 kubenswrapper[33013]: I0313 10:56:56.703883 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df756f0-c6b6-4730-842a-7ee9227397ae" volumeName="kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-node-bootstrap-token" seLinuxMountContext="" Mar 13 10:56:56.703986 master-0 kubenswrapper[33013]: I0313 10:56:56.703969 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" volumeName="kubernetes.io/projected/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-kube-api-access-f5656" seLinuxMountContext="" Mar 13 10:56:56.704072 master-0 kubenswrapper[33013]: I0313 10:56:56.704054 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a05e72d-836f-40e0-8a5c-ee02dce494b3" volumeName="kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-utilities" seLinuxMountContext="" Mar 13 10:56:56.704165 master-0 kubenswrapper[33013]: I0313 10:56:56.704146 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8" volumeName="kubernetes.io/projected/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-api-access-qdg6f" seLinuxMountContext="" Mar 13 10:56:56.704260 master-0 kubenswrapper[33013]: I0313 10:56:56.704243 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b2e803-302b-4650-b18f-d3d2dd703bd5" volumeName="kubernetes.io/configmap/37b2e803-302b-4650-b18f-d3d2dd703bd5-config" seLinuxMountContext="" Mar 13 10:56:56.704345 master-0 kubenswrapper[33013]: I0313 10:56:56.704327 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8d40b37-0f3d-4531-9fa8-eda965d2337d" volumeName="kubernetes.io/empty-dir/b8d40b37-0f3d-4531-9fa8-eda965d2337d-operand-assets" seLinuxMountContext="" Mar 13 10:56:56.704429 master-0 kubenswrapper[33013]: I0313 10:56:56.704413 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f87662b9-6ac6-44f3-8a16-ff858c2baa91" volumeName="kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert" seLinuxMountContext="" Mar 13 10:56:56.704519 master-0 kubenswrapper[33013]: I0313 10:56:56.704501 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f9db15a-8854-485b-9863-9cbe5dddd977" volumeName="kubernetes.io/configmap/8f9db15a-8854-485b-9863-9cbe5dddd977-config" seLinuxMountContext="" Mar 13 10:56:56.704621 master-0 kubenswrapper[33013]: I0313 10:56:56.704603 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c87545aa-11c2-4e6e-8c13-16eeff3be83b" volumeName="kubernetes.io/secret/c87545aa-11c2-4e6e-8c13-16eeff3be83b-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.704731 master-0 kubenswrapper[33013]: I0313 10:56:56.704714 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33" volumeName="kubernetes.io/projected/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-kube-api-access-bt7hs" seLinuxMountContext="" Mar 13 10:56:56.704824 master-0 kubenswrapper[33013]: I0313 10:56:56.704807 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48f99840-4d9e-49c5-819e-0bb15493feb5" volumeName="kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-images" seLinuxMountContext="" Mar 13 10:56:56.704907 master-0 kubenswrapper[33013]: I0313 10:56:56.704890 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5448b59a-b731-45a3-9ded-d25315f597fb" volumeName="kubernetes.io/projected/5448b59a-b731-45a3-9ded-d25315f597fb-kube-api-access-d8kvd" seLinuxMountContext="" Mar 13 10:56:56.704999 master-0 kubenswrapper[33013]: I0313 10:56:56.704980 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939a3da3-62e7-4376-853d-dc333465446c" volumeName="kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.705085 master-0 kubenswrapper[33013]: I0313 10:56:56.705068 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="2195f7be-b41e-4ae2-b737-d5782e0d41a8" volumeName="kubernetes.io/projected/2195f7be-b41e-4ae2-b737-d5782e0d41a8-kube-api-access-r657p" seLinuxMountContext="" Mar 13 10:56:56.705179 master-0 kubenswrapper[33013]: I0313 10:56:56.705162 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed47c57-533f-43e4-88eb-07da29b4878f" volumeName="kubernetes.io/secret/6ed47c57-533f-43e4-88eb-07da29b4878f-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.705264 master-0 kubenswrapper[33013]: I0313 10:56:56.705247 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9da11462-a91d-4d02-8614-78b4c5b2f7e2" volumeName="kubernetes.io/projected/9da11462-a91d-4d02-8614-78b4c5b2f7e2-kube-api-access-hp847" seLinuxMountContext="" Mar 13 10:56:56.705356 master-0 kubenswrapper[33013]: I0313 10:56:56.705338 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9db15a-8854-485b-9863-9cbe5dddd977" volumeName="kubernetes.io/projected/8f9db15a-8854-485b-9863-9cbe5dddd977-kube-api-access" seLinuxMountContext="" Mar 13 10:56:56.705453 master-0 kubenswrapper[33013]: I0313 10:56:56.705429 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb778c86-ea51-4eab-82b8-a8e0bec0f050" volumeName="kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-metrics-certs" seLinuxMountContext="" Mar 13 10:56:56.705545 master-0 kubenswrapper[33013]: I0313 10:56:56.705528 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/secret/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.705661 master-0 kubenswrapper[33013]: I0313 10:56:56.705643 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="60e17cd1-c520-4d8d-8c72-47bf73b8cc66" volumeName="kubernetes.io/secret/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-proxy-tls" seLinuxMountContext="" Mar 13 10:56:56.705775 master-0 kubenswrapper[33013]: I0313 10:56:56.705732 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed47c57-533f-43e4-88eb-07da29b4878f" volumeName="kubernetes.io/projected/6ed47c57-533f-43e4-88eb-07da29b4878f-kube-api-access-rjk5l" seLinuxMountContext="" Mar 13 10:56:56.705900 master-0 kubenswrapper[33013]: I0313 10:56:56.705881 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48f99840-4d9e-49c5-819e-0bb15493feb5" volumeName="kubernetes.io/secret/48f99840-4d9e-49c5-819e-0bb15493feb5-machine-api-operator-tls" seLinuxMountContext="" Mar 13 10:56:56.705992 master-0 kubenswrapper[33013]: I0313 10:56:56.705973 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b796628-a6ca-4d5c-9870-0ca60b9372aa" volumeName="kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.706077 master-0 kubenswrapper[33013]: I0313 10:56:56.706060 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8d40b37-0f3d-4531-9fa8-eda965d2337d" volumeName="kubernetes.io/projected/b8d40b37-0f3d-4531-9fa8-eda965d2337d-kube-api-access-l5rht" seLinuxMountContext="" Mar 13 10:56:56.706202 master-0 kubenswrapper[33013]: I0313 10:56:56.706182 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb778c86-ea51-4eab-82b8-a8e0bec0f050" volumeName="kubernetes.io/projected/eb778c86-ea51-4eab-82b8-a8e0bec0f050-kube-api-access-hkdfn" seLinuxMountContext="" Mar 13 10:56:56.706301 master-0 kubenswrapper[33013]: I0313 10:56:56.706283 33013 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.706403 master-0 kubenswrapper[33013]: I0313 10:56:56.706382 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d72d950-cfb4-4ed5-9ad6-f7266b937493" volumeName="kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-config" seLinuxMountContext="" Mar 13 10:56:56.706494 master-0 kubenswrapper[33013]: I0313 10:56:56.706474 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a05e72d-836f-40e0-8a5c-ee02dce494b3" volumeName="kubernetes.io/projected/2a05e72d-836f-40e0-8a5c-ee02dce494b3-kube-api-access-qdb2x" seLinuxMountContext="" Mar 13 10:56:56.706622 master-0 kubenswrapper[33013]: I0313 10:56:56.706569 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa4b44d-f202-4670-afab-44b38960026f" volumeName="kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-cni-binary-copy" seLinuxMountContext="" Mar 13 10:56:56.706720 master-0 kubenswrapper[33013]: I0313 10:56:56.706702 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b68ed803-45e2-42f1-99b1-33cf59b01d74" volumeName="kubernetes.io/empty-dir/b68ed803-45e2-42f1-99b1-33cf59b01d74-audit-log" seLinuxMountContext="" Mar 13 10:56:56.706819 master-0 kubenswrapper[33013]: I0313 10:56:56.706802 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b" volumeName="kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-serving-ca" seLinuxMountContext="" Mar 13 10:56:56.706906 master-0 kubenswrapper[33013]: I0313 10:56:56.706887 33013 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="549bd192-0235-4994-b485-f1b10d16f6b5" volumeName="kubernetes.io/projected/549bd192-0235-4994-b485-f1b10d16f6b5-kube-api-access-pwqp6" seLinuxMountContext="" Mar 13 10:56:56.706990 master-0 kubenswrapper[33013]: I0313 10:56:56.706974 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.707074 master-0 kubenswrapper[33013]: I0313 10:56:56.707058 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8" volumeName="kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-metrics-client-ca" seLinuxMountContext="" Mar 13 10:56:56.707158 master-0 kubenswrapper[33013]: I0313 10:56:56.707142 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c12a5d5-711f-4663-974c-c4b06e15fc39" volumeName="kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovnkube-config" seLinuxMountContext="" Mar 13 10:56:56.707251 master-0 kubenswrapper[33013]: I0313 10:56:56.707232 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a305f45-8689-45a8-8c8b-5954f2c863df" volumeName="kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.707340 master-0 kubenswrapper[33013]: I0313 10:56:56.707323 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42b4d53c-af72-44c8-9605-271445f95f87" volumeName="kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls" seLinuxMountContext="" Mar 13 10:56:56.707435 master-0 kubenswrapper[33013]: I0313 10:56:56.707415 33013 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="5448b59a-b731-45a3-9ded-d25315f597fb" volumeName="kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-tls" seLinuxMountContext="" Mar 13 10:56:56.707533 master-0 kubenswrapper[33013]: I0313 10:56:56.707516 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" volumeName="kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.707651 master-0 kubenswrapper[33013]: I0313 10:56:56.707630 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="484e6d0b-d057-4658-8e49-bbe7e6f6ee86" volumeName="kubernetes.io/secret/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 13 10:56:56.707755 master-0 kubenswrapper[33013]: I0313 10:56:56.707736 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-config" seLinuxMountContext="" Mar 13 10:56:56.707854 master-0 kubenswrapper[33013]: I0313 10:56:56.707836 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f42a72-24c7-49e6-8edb-97b2b0d6183a" volumeName="kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-images" seLinuxMountContext="" Mar 13 10:56:56.707947 master-0 kubenswrapper[33013]: I0313 10:56:56.707929 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f87662b9-6ac6-44f3-8a16-ff858c2baa91" volumeName="kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-ovnkube-identity-cm" seLinuxMountContext="" Mar 13 10:56:56.708042 master-0 kubenswrapper[33013]: I0313 10:56:56.708023 33013 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="070b85a0-f076-4750-aa00-dabba401dc75" volumeName="kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cert" seLinuxMountContext="" Mar 13 10:56:56.708134 master-0 kubenswrapper[33013]: I0313 10:56:56.708115 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14f6e3b2-716c-4392-b3c8-75b2168ccfb7" volumeName="kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs" seLinuxMountContext="" Mar 13 10:56:56.708220 master-0 kubenswrapper[33013]: I0313 10:56:56.708202 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="257a4a8b-014c-4473-80a0-e95cf6d41bf1" volumeName="kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs" seLinuxMountContext="" Mar 13 10:56:56.708319 master-0 kubenswrapper[33013]: I0313 10:56:56.708301 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" volumeName="kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca" seLinuxMountContext="" Mar 13 10:56:56.708417 master-0 kubenswrapper[33013]: I0313 10:56:56.708398 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a1b43c4-55b9-4c72-ba7c-9089bf28cf16" volumeName="kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-utilities" seLinuxMountContext="" Mar 13 10:56:56.708502 master-0 kubenswrapper[33013]: I0313 10:56:56.708485 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="549bd192-0235-4994-b485-f1b10d16f6b5" volumeName="kubernetes.io/configmap/549bd192-0235-4994-b485-f1b10d16f6b5-signing-cabundle" seLinuxMountContext="" Mar 13 10:56:56.708611 master-0 kubenswrapper[33013]: I0313 10:56:56.708576 33013 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-client" seLinuxMountContext="" Mar 13 10:56:56.708712 master-0 kubenswrapper[33013]: I0313 10:56:56.708694 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed47c57-533f-43e4-88eb-07da29b4878f" volumeName="kubernetes.io/empty-dir/6ed47c57-533f-43e4-88eb-07da29b4878f-available-featuregates" seLinuxMountContext="" Mar 13 10:56:56.708814 master-0 kubenswrapper[33013]: I0313 10:56:56.708792 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667717b-fb74-456b-8615-16475cb69e98" volumeName="kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-bound-sa-token" seLinuxMountContext="" Mar 13 10:56:56.708906 master-0 kubenswrapper[33013]: I0313 10:56:56.708889 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c09f42db-e6d7-469d-9761-88a879f6aa6b" volumeName="kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca" seLinuxMountContext="" Mar 13 10:56:56.708999 master-0 kubenswrapper[33013]: I0313 10:56:56.708982 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d72d950-cfb4-4ed5-9ad6-f7266b937493" volumeName="kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.709087 master-0 kubenswrapper[33013]: I0313 10:56:56.709069 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="257a4a8b-014c-4473-80a0-e95cf6d41bf1" volumeName="kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-ca-certs" seLinuxMountContext="" Mar 13 10:56:56.709239 master-0 kubenswrapper[33013]: I0313 10:56:56.709157 33013 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="48f99840-4d9e-49c5-819e-0bb15493feb5" volumeName="kubernetes.io/projected/48f99840-4d9e-49c5-819e-0bb15493feb5-kube-api-access-mb5l4" seLinuxMountContext="" Mar 13 10:56:56.709300 master-0 kubenswrapper[33013]: I0313 10:56:56.709276 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c455a959-d764-4b4f-a1e0-95c27495dd9d" volumeName="kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert" seLinuxMountContext="" Mar 13 10:56:56.709357 master-0 kubenswrapper[33013]: I0313 10:56:56.709307 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9075a44-22d3-4562-819e-d5a92f013663" volumeName="kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-etc-tuned" seLinuxMountContext="" Mar 13 10:56:56.709357 master-0 kubenswrapper[33013]: I0313 10:56:56.709326 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec3168fc-6c8f-4603-94e0-17b1ae22a802" volumeName="kubernetes.io/secret/ec3168fc-6c8f-4603-94e0-17b1ae22a802-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.709357 master-0 kubenswrapper[33013]: I0313 10:56:56.709337 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05a72a4c-5ce8-49d1-8e4f-334f63d4e987" volumeName="kubernetes.io/projected/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-kube-api-access-btws6" seLinuxMountContext="" Mar 13 10:56:56.709357 master-0 kubenswrapper[33013]: I0313 10:56:56.709349 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="257a4a8b-014c-4473-80a0-e95cf6d41bf1" volumeName="kubernetes.io/empty-dir/257a4a8b-014c-4473-80a0-e95cf6d41bf1-cache" seLinuxMountContext="" Mar 13 10:56:56.709357 master-0 kubenswrapper[33013]: I0313 10:56:56.709361 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="86ae8cb8-72b3-4be6-9feb-ee0c0da42dba" volumeName="kubernetes.io/secret/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709375 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b" volumeName="kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-encryption-config" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709393 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/secret/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovn-node-metrics-cert" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709403 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9075a44-22d3-4562-819e-d5a92f013663" volumeName="kubernetes.io/projected/d9075a44-22d3-4562-819e-d5a92f013663-kube-api-access-htqw9" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709418 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec3168fc-6c8f-4603-94e0-17b1ae22a802" volumeName="kubernetes.io/configmap/ec3168fc-6c8f-4603-94e0-17b1ae22a802-config" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709429 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a998af-4fc0-4078-a6a0-93dde6c00508" volumeName="kubernetes.io/projected/a1a998af-4fc0-4078-a6a0-93dde6c00508-kube-api-access-p29zg" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709352 33013 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709441 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b10584c2-ef04-4649-bcb6-9222c9530c3f" volumeName="kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709509 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb778c86-ea51-4eab-82b8-a8e0bec0f050" volumeName="kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-default-certificate" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709525 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ac1a605-d2d5-4004-96f5-121c20555bde" volumeName="kubernetes.io/projected/0ac1a605-d2d5-4004-96f5-121c20555bde-kube-api-access" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709541 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c09f42db-e6d7-469d-9761-88a879f6aa6b" volumeName="kubernetes.io/projected/c09f42db-e6d7-469d-9761-88a879f6aa6b-kube-api-access-mcb99" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709551 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d72d950-cfb4-4ed5-9ad6-f7266b937493" volumeName="kubernetes.io/projected/1d72d950-cfb4-4ed5-9ad6-f7266b937493-kube-api-access-h9cbm" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709564 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60e17cd1-c520-4d8d-8c72-47bf73b8cc66" 
volumeName="kubernetes.io/projected/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-kube-api-access-xg9zz" seLinuxMountContext="" Mar 13 10:56:56.709574 master-0 kubenswrapper[33013]: I0313 10:56:56.709575 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b68ed803-45e2-42f1-99b1-33cf59b01d74" volumeName="kubernetes.io/projected/b68ed803-45e2-42f1-99b1-33cf59b01d74-kube-api-access-q5hq9" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709633 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939a3da3-62e7-4376-853d-dc333465446c" volumeName="kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709651 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9da11462-a91d-4d02-8614-78b4c5b2f7e2" volumeName="kubernetes.io/secret/9da11462-a91d-4d02-8614-78b4c5b2f7e2-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709672 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b68ed803-45e2-42f1-99b1-33cf59b01d74" volumeName="kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709687 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e6ecc16-19cb-4b66-801f-b958b10d0ce7" volumeName="kubernetes.io/configmap/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cco-trusted-ca" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709706 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="86774fd7-7c26-4b41-badb-de1004397637" volumeName="kubernetes.io/secret/86774fd7-7c26-4b41-badb-de1004397637-samples-operator-tls" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709722 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cf9326b-bc23-45c2-82c4-9c08c739ac5a" volumeName="kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709740 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26cc0e72-8b4f-4087-89b9-05d2cf6df3f6" volumeName="kubernetes.io/projected/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-kube-api-access-m7v6s" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709755 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8" volumeName="kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709769 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d8af021-f20f-48a2-8b2a-3a5a3f37237f" volumeName="kubernetes.io/configmap/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-metrics-client-ca" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709782 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42b4d53c-af72-44c8-9605-271445f95f87" volumeName="kubernetes.io/projected/42b4d53c-af72-44c8-9605-271445f95f87-kube-api-access-kjcjm" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709795 33013 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5ed5e77b-948b-4d94-ac9f-440ee3c07e18" volumeName="kubernetes.io/projected/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-kube-api-access-22bwx" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709806 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79bb87a4-8834-4c73-834e-356ccc1f7f9b" volumeName="kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709820 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa507cf-017d-44f5-8662-77547f82fb51" volumeName="kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-catalog-content" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709856 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939a3da3-62e7-4376-853d-dc333465446c" volumeName="kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709873 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa4b44d-f202-4670-afab-44b38960026f" volumeName="kubernetes.io/projected/9aa4b44d-f202-4670-afab-44b38960026f-kube-api-access-bjvtr" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709889 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfbaa57e-adac-48f8-8182-b4fdb42fbb9c" volumeName="kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-auth-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709902 33013 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c12a5d5-711f-4663-974c-c4b06e15fc39" volumeName="kubernetes.io/projected/1c12a5d5-711f-4663-974c-c4b06e15fc39-kube-api-access-cg69z" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709916 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939a3da3-62e7-4376-853d-dc333465446c" volumeName="kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709927 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939a3da3-62e7-4376-853d-dc333465446c" volumeName="kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709939 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b2e803-302b-4650-b18f-d3d2dd703bd5" volumeName="kubernetes.io/secret/37b2e803-302b-4650-b18f-d3d2dd703bd5-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709951 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e485e709-32ba-442b-98e5-b4073516c0ab" volumeName="kubernetes.io/projected/e485e709-32ba-442b-98e5-b4073516c0ab-kube-api-access-qwc4l" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709964 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfbaa57e-adac-48f8-8182-b4fdb42fbb9c" volumeName="kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-images" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: 
I0313 10:56:56.709976 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86774fd7-7c26-4b41-badb-de1004397637" volumeName="kubernetes.io/projected/86774fd7-7c26-4b41-badb-de1004397637-kube-api-access-tfxm5" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.709988 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9db15a-8854-485b-9863-9cbe5dddd977" volumeName="kubernetes.io/secret/8f9db15a-8854-485b-9863-9cbe5dddd977-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710000 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d8af021-f20f-48a2-8b2a-3a5a3f37237f" volumeName="kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-tls" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710013 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/projected/574bf255-14b3-40af-b240-2d3abd5b86b8-kube-api-access-grplv" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710026 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d8af021-f20f-48a2-8b2a-3a5a3f37237f" volumeName="kubernetes.io/projected/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-kube-api-access-dzxzs" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710039 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec121f87-93ea-468c-a25f-2ec5e7d0e0ee" volumeName="kubernetes.io/secret/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-machine-approver-tls" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 
kubenswrapper[33013]: I0313 10:56:56.710049 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ac1a605-d2d5-4004-96f5-121c20555bde" volumeName="kubernetes.io/secret/0ac1a605-d2d5-4004-96f5-121c20555bde-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710061 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a05e72d-836f-40e0-8a5c-ee02dce494b3" volumeName="kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-catalog-content" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710072 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2afe3890-e844-4dd3-ba49-3ac9178549bf" volumeName="kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710084 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" volumeName="kubernetes.io/projected/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-kube-api-access-knkb7" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710115 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667717b-fb74-456b-8615-16475cb69e98" volumeName="kubernetes.io/configmap/7667717b-fb74-456b-8615-16475cb69e98-trusted-ca" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710141 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b68ed803-45e2-42f1-99b1-33cf59b01d74" volumeName="kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 
kubenswrapper[33013]: I0313 10:56:56.710160 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05a72a4c-5ce8-49d1-8e4f-334f63d4e987" volumeName="kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710175 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-service-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710191 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/projected/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-kube-api-access-qqjkf" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710208 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b" volumeName="kubernetes.io/projected/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-kube-api-access-tdgld" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710222 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d5479f3-51ec-4b93-8188-21cdda44828d" volumeName="kubernetes.io/projected/4d5479f3-51ec-4b93-8188-21cdda44828d-kube-api-access-j6xlb" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710238 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5448b59a-b731-45a3-9ded-d25315f597fb" volumeName="kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" 
Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710254 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-ca" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710270 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cf9326b-bc23-45c2-82c4-9c08c739ac5a" volumeName="kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-bound-sa-token" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710282 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="070b85a0-f076-4750-aa00-dabba401dc75" volumeName="kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710296 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="11927952-723f-4d6d-922b-73139abe8877" volumeName="kubernetes.io/secret/11927952-723f-4d6d-922b-73139abe8877-metrics-tls" seLinuxMountContext="" Mar 13 10:56:56.710222 master-0 kubenswrapper[33013]: I0313 10:56:56.710309 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="257a4a8b-014c-4473-80a0-e95cf6d41bf1" volumeName="kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-kube-api-access-hzv5v" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710328 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" volumeName="kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config" seLinuxMountContext="" Mar 13 
10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710342 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8d40b37-0f3d-4531-9fa8-eda965d2337d" volumeName="kubernetes.io/secret/b8d40b37-0f3d-4531-9fa8-eda965d2337d-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710355 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c87545aa-11c2-4e6e-8c13-16eeff3be83b" volumeName="kubernetes.io/projected/c87545aa-11c2-4e6e-8c13-16eeff3be83b-kube-api-access-pwfzq" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710367 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6622be09-206e-4d02-90ca-6d9f2fc852aa" volumeName="kubernetes.io/projected/6622be09-206e-4d02-90ca-6d9f2fc852aa-kube-api-access-lqg6g" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710378 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beee81ef-5a3a-4df2-85d5-2573679d261f" volumeName="kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-catalog-content" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710391 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c09f42db-e6d7-469d-9761-88a879f6aa6b" volumeName="kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710403 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b68ed803-45e2-42f1-99b1-33cf59b01d74" volumeName="kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle" 
seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710417 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f87662b9-6ac6-44f3-8a16-ff858c2baa91" volumeName="kubernetes.io/projected/f87662b9-6ac6-44f3-8a16-ff858c2baa91-kube-api-access-zk4sg" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710428 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="070b85a0-f076-4750-aa00-dabba401dc75" volumeName="kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710440 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="11927952-723f-4d6d-922b-73139abe8877" volumeName="kubernetes.io/configmap/11927952-723f-4d6d-922b-73139abe8877-config-volume" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710451 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d72d950-cfb4-4ed5-9ad6-f7266b937493" volumeName="kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710464 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa4b44d-f202-4670-afab-44b38960026f" volumeName="kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-multus-daemon-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710475 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb778c86-ea51-4eab-82b8-a8e0bec0f050" volumeName="kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-stats-auth" 
seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710489 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" volumeName="kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710500 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d72d950-cfb4-4ed5-9ad6-f7266b937493" volumeName="kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-encryption-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710513 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" volumeName="kubernetes.io/configmap/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710526 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a1b43c4-55b9-4c72-ba7c-9089bf28cf16" volumeName="kubernetes.io/projected/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-kube-api-access-rvvhh" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710540 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803de28e-3b31-4ea2-9b97-87a733635a5c" volumeName="kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710552 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfbaa57e-adac-48f8-8182-b4fdb42fbb9c" volumeName="kubernetes.io/projected/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-kube-api-access-j25nl" 
seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710565 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beee81ef-5a3a-4df2-85d5-2573679d261f" volumeName="kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-utilities" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710578 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beee81ef-5a3a-4df2-85d5-2573679d261f" volumeName="kubernetes.io/projected/beee81ef-5a3a-4df2-85d5-2573679d261f-kube-api-access-f8q5s" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710614 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" volumeName="kubernetes.io/secret/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-metrics-tls" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710628 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d5479f3-51ec-4b93-8188-21cdda44828d" volumeName="kubernetes.io/configmap/4d5479f3-51ec-4b93-8188-21cdda44828d-telemetry-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710641 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86ae8cb8-72b3-4be6-9feb-ee0c0da42dba" volumeName="kubernetes.io/projected/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-kube-api-access" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710653 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" volumeName="kubernetes.io/projected/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-kube-api-access-8rfpp" 
seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710666 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d72d950-cfb4-4ed5-9ad6-f7266b937493" volumeName="kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-image-import-ca" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710680 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5448b59a-b731-45a3-9ded-d25315f597fb" volumeName="kubernetes.io/configmap/5448b59a-b731-45a3-9ded-d25315f597fb-metrics-client-ca" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710693 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b796628-a6ca-4d5c-9870-0ca60b9372aa" volumeName="kubernetes.io/configmap/5b796628-a6ca-4d5c-9870-0ca60b9372aa-metrics-client-ca" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710706 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-script-lib" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710721 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f42a72-24c7-49e6-8edb-97b2b0d6183a" volumeName="kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-auth-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710734 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b796628-a6ca-4d5c-9870-0ca60b9372aa" 
volumeName="kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-tls" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710746 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c12a5d5-711f-4663-974c-c4b06e15fc39" volumeName="kubernetes.io/secret/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710764 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b" volumeName="kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710780 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b796628-a6ca-4d5c-9870-0ca60b9372aa" volumeName="kubernetes.io/empty-dir/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-textfile" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710796 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d72d950-cfb4-4ed5-9ad6-f7266b937493" volumeName="kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-serving-ca" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710812 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a1b43c4-55b9-4c72-ba7c-9089bf28cf16" volumeName="kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-catalog-content" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710826 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="939a3da3-62e7-4376-853d-dc333465446c" volumeName="kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710841 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="866cf034-8fd8-4f16-8e9b-68627228aa8d" volumeName="kubernetes.io/projected/866cf034-8fd8-4f16-8e9b-68627228aa8d-kube-api-access-mnrlx" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710855 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cf9326b-bc23-45c2-82c4-9c08c739ac5a" volumeName="kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-kube-api-access-m5vcv" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710872 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b68ed803-45e2-42f1-99b1-33cf59b01d74" volumeName="kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710884 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5843b0d4-a538-4261-b425-598e318c9d07" volumeName="kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-whereabouts-configmap" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710894 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60e17cd1-c520-4d8d-8c72-47bf73b8cc66" volumeName="kubernetes.io/configmap/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-mcd-auth-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710905 33013 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7667717b-fb74-456b-8615-16475cb69e98" volumeName="kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710917 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8" volumeName="kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-tls" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710929 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5843b0d4-a538-4261-b425-598e318c9d07" volumeName="kubernetes.io/projected/5843b0d4-a538-4261-b425-598e318c9d07-kube-api-access-r6nnz" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710941 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ac1a605-d2d5-4004-96f5-121c20555bde" volumeName="kubernetes.io/configmap/0ac1a605-d2d5-4004-96f5-121c20555bde-service-ca" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710953 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26cc0e72-8b4f-4087-89b9-05d2cf6df3f6" volumeName="kubernetes.io/configmap/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-mcc-auth-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710965 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2afe3890-e844-4dd3-ba49-3ac9178549bf" volumeName="kubernetes.io/projected/2afe3890-e844-4dd3-ba49-3ac9178549bf-kube-api-access-d84xk" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710977 33013 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="8cf9326b-bc23-45c2-82c4-9c08c739ac5a" volumeName="kubernetes.io/configmap/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-trusted-ca" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.710988 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="070b85a0-f076-4750-aa00-dabba401dc75" volumeName="kubernetes.io/projected/070b85a0-f076-4750-aa00-dabba401dc75-kube-api-access-nlmhn" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711005 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="484e6d0b-d057-4658-8e49-bbe7e6f6ee86" volumeName="kubernetes.io/projected/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-kube-api-access-qbdwm" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711020 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b796628-a6ca-4d5c-9870-0ca60b9372aa" volumeName="kubernetes.io/projected/5b796628-a6ca-4d5c-9870-0ca60b9372aa-kube-api-access-48nns" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711037 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa507cf-017d-44f5-8662-77547f82fb51" volumeName="kubernetes.io/projected/5aa507cf-017d-44f5-8662-77547f82fb51-kube-api-access-jt6sd" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711050 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26cc0e72-8b4f-4087-89b9-05d2cf6df3f6" volumeName="kubernetes.io/secret/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-proxy-tls" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711061 33013 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="3ff2ab1c-7057-4e18-8e32-68807f86532a" volumeName="kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711073 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5843b0d4-a538-4261-b425-598e318c9d07" volumeName="kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-binary-copy" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711084 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfbaa57e-adac-48f8-8182-b4fdb42fbb9c" volumeName="kubernetes.io/secret/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711096 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d72d950-cfb4-4ed5-9ad6-f7266b937493" volumeName="kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711109 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa507cf-017d-44f5-8662-77547f82fb51" volumeName="kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-utilities" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711108 33013 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711120 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ed5e77b-948b-4d94-ac9f-440ee3c07e18" volumeName="kubernetes.io/secret/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711168 33013 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711237 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86ae8cb8-72b3-4be6-9feb-ee0c0da42dba" volumeName="kubernetes.io/configmap/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-config" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711313 33013 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: E0313 10:56:56.711375 33013 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711311 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939a3da3-62e7-4376-853d-dc333465446c" volumeName="kubernetes.io/projected/939a3da3-62e7-4376-853d-dc333465446c-kube-api-access-t2q2f" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711408 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c87545aa-11c2-4e6e-8c13-16eeff3be83b" volumeName="kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711424 33013 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="c87545aa-11c2-4e6e-8c13-16eeff3be83b" volumeName="kubernetes.io/empty-dir/c87545aa-11c2-4e6e-8c13-16eeff3be83b-snapshots" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711441 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4d5479f3-51ec-4b93-8188-21cdda44828d" volumeName="kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711457 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e6ecc16-19cb-4b66-801f-b958b10d0ce7" volumeName="kubernetes.io/projected/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-kube-api-access-gn8w5" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711477 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" volumeName="kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711499 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939a3da3-62e7-4376-853d-dc333465446c" volumeName="kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-metrics-client-ca" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711515 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b68ed803-45e2-42f1-99b1-33cf59b01d74" volumeName="kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711533 33013 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" volumeName="kubernetes.io/projected/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-kube-api-access-vxvqn" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711716 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c455a959-d764-4b4f-a1e0-95c27495dd9d" volumeName="kubernetes.io/projected/c455a959-d764-4b4f-a1e0-95c27495dd9d-kube-api-access-2cpdn" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711737 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" volumeName="kubernetes.io/projected/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-kube-api-access-lpdlr" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711761 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8" volumeName="kubernetes.io/empty-dir/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-volume-directive-shadow" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711785 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b2e803-302b-4650-b18f-d3d2dd703bd5" volumeName="kubernetes.io/projected/37b2e803-302b-4650-b18f-d3d2dd703bd5-kube-api-access-hp2qn" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711805 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c12a5d5-711f-4663-974c-c4b06e15fc39" volumeName="kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-env-overrides" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: 
I0313 10:56:56.711824 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b" volumeName="kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-client" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711838 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec121f87-93ea-468c-a25f-2ec5e7d0e0ee" volumeName="kubernetes.io/projected/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-kube-api-access-s2znn" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711853 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c09f42db-e6d7-469d-9761-88a879f6aa6b" volumeName="kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.712161 master-0 kubenswrapper[33013]: I0313 10:56:56.711870 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ed5e77b-948b-4d94-ac9f-440ee3c07e18" volumeName="kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.714446 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b10584c2-ef04-4649-bcb6-9222c9530c3f" volumeName="kubernetes.io/empty-dir/b10584c2-ef04-4649-bcb6-9222c9530c3f-cache" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.714617 33013 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.714549 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b12e76f4-b960-4534-90e6-a2cdbecd1728" 
volumeName="kubernetes.io/configmap/b12e76f4-b960-4534-90e6-a2cdbecd1728-iptables-alerter-script" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.714878 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="549bd192-0235-4994-b485-f1b10d16f6b5" volumeName="kubernetes.io/secret/549bd192-0235-4994-b485-f1b10d16f6b5-signing-key" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.714903 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f42a72-24c7-49e6-8edb-97b2b0d6183a" volumeName="kubernetes.io/projected/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-kube-api-access-26dtr" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.714925 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14f6e3b2-716c-4392-b3c8-75b2168ccfb7" volumeName="kubernetes.io/projected/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-kube-api-access-gh6kl" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.714966 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33" volumeName="kubernetes.io/configmap/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-auth-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.714984 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a998af-4fc0-4078-a6a0-93dde6c00508" volumeName="kubernetes.io/secret/a1a998af-4fc0-4078-a6a0-93dde6c00508-serving-cert" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.714998 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b12e76f4-b960-4534-90e6-a2cdbecd1728" volumeName="kubernetes.io/projected/b12e76f4-b960-4534-90e6-a2cdbecd1728-kube-api-access-xq9dl" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715015 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d288e5d0-0976-477f-be14-b3d5828e0482" volumeName="kubernetes.io/projected/d288e5d0-0976-477f-be14-b3d5828e0482-kube-api-access-5k8rp" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715072 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1edde4bf-4554-4ab2-b588-513ad84a9bae" volumeName="kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-webhook-cert" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715127 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48f99840-4d9e-49c5-819e-0bb15493feb5" volumeName="kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-config" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715145 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79bb87a4-8834-4c73-834e-356ccc1f7f9b" volumeName="kubernetes.io/projected/79bb87a4-8834-4c73-834e-356ccc1f7f9b-kube-api-access-56qz6" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715157 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ff2ab1c-7057-4e18-8e32-68807f86532a" volumeName="kubernetes.io/projected/3ff2ab1c-7057-4e18-8e32-68807f86532a-kube-api-access-8c4rc" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715171 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="9d8af021-f20f-48a2-8b2a-3a5a3f37237f" volumeName="kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715206 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb778c86-ea51-4eab-82b8-a8e0bec0f050" volumeName="kubernetes.io/configmap/eb778c86-ea51-4eab-82b8-a8e0bec0f050-service-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715224 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="574bf255-14b3-40af-b240-2d3abd5b86b8" volumeName="kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-service-ca" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715239 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" volumeName="kubernetes.io/configmap/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-trusted-ca" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715337 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667717b-fb74-456b-8615-16475cb69e98" volumeName="kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-kube-api-access-qd2mn" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715397 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="11927952-723f-4d6d-922b-73139abe8877" volumeName="kubernetes.io/projected/11927952-723f-4d6d-922b-73139abe8877-kube-api-access-kgb25" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715413 33013 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1edde4bf-4554-4ab2-b588-513ad84a9bae" volumeName="kubernetes.io/empty-dir/1edde4bf-4554-4ab2-b588-513ad84a9bae-tmpfs" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715450 33013 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b" volumeName="kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-trusted-ca-bundle" seLinuxMountContext="" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715466 33013 reconstruct.go:97] "Volume reconstruction finished" Mar 13 10:56:56.716919 master-0 kubenswrapper[33013]: I0313 10:56:56.715477 33013 reconciler.go:26] "Reconciler: start to sync state" Mar 13 10:56:56.719205 master-0 kubenswrapper[33013]: I0313 10:56:56.719155 33013 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 13 10:56:56.723145 master-0 kubenswrapper[33013]: I0313 10:56:56.723059 33013 generic.go:334] "Generic (PLEG): container finished" podID="8f9db15a-8854-485b-9863-9cbe5dddd977" containerID="3d7f37aa994251928291249049a2be620c22f26b28c64911444e794ad1a679e5" exitCode=0 Mar 13 10:56:56.730186 master-0 kubenswrapper[33013]: I0313 10:56:56.730108 33013 generic.go:334] "Generic (PLEG): container finished" podID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerID="1bea0672139d7f4dff089e018c1c16d0afb0f3f466924f1394e930cdfd82c0f0" exitCode=0 Mar 13 10:56:56.732448 master-0 kubenswrapper[33013]: I0313 10:56:56.732404 33013 generic.go:334] "Generic (PLEG): container finished" podID="05f7830b-51cc-45d2-bbb3-ac01eeed57ac" containerID="e30ddedef616e76982e7503ccdc6b701bfe5c6467184889999283ee9de5f7a92" exitCode=0 Mar 13 10:56:56.735209 master-0 kubenswrapper[33013]: I0313 10:56:56.734434 33013 generic.go:334] "Generic (PLEG): container finished" podID="d8bdd05f-f920-4441-969f-336c85d2da57" 
containerID="c54439de52c783224aa04045b8c8a51003280811e42de25b97607e84d8c7daa8" exitCode=0 Mar 13 10:56:56.738237 master-0 kubenswrapper[33013]: I0313 10:56:56.737171 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-7h8nz_48f99840-4d9e-49c5-819e-0bb15493feb5/machine-api-operator/0.log" Mar 13 10:56:56.738237 master-0 kubenswrapper[33013]: I0313 10:56:56.738172 33013 generic.go:334] "Generic (PLEG): container finished" podID="48f99840-4d9e-49c5-819e-0bb15493feb5" containerID="3db54e90276a64402967c0bc59c00901e01327339bb78dd658883ac9c02f925f" exitCode=255 Mar 13 10:56:56.740289 master-0 kubenswrapper[33013]: I0313 10:56:56.740260 33013 generic.go:334] "Generic (PLEG): container finished" podID="8cf9326b-bc23-45c2-82c4-9c08c739ac5a" containerID="43230423fe1ad4b520548b08f0898f9f7d5cb849ac1cf6fadabab03cda0d4f3c" exitCode=0 Mar 13 10:56:56.750767 master-0 kubenswrapper[33013]: I0313 10:56:56.750716 33013 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="b9afa0d6c9ded08257918288601275e200a1f5d816485290920a81d0a9149405" exitCode=0 Mar 13 10:56:56.750767 master-0 kubenswrapper[33013]: I0313 10:56:56.750752 33013 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="a71a5f7050d9b50b349f60da266053c0daef17268d0a768624b3f4f70f7f01a0" exitCode=0 Mar 13 10:56:56.750767 master-0 kubenswrapper[33013]: I0313 10:56:56.750764 33013 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="5aea8eda95c6cad12da786a1a1cc2a69af0868d380d904ea93a9398f7754ee5b" exitCode=0 Mar 13 10:56:56.750767 master-0 kubenswrapper[33013]: I0313 10:56:56.750775 33013 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="c52caffe2e52c9e9297b6c1f2ec3f7f6e6e6506eb77ca1a1569946e8d355217d" exitCode=0 Mar 13 
10:56:56.750767 master-0 kubenswrapper[33013]: I0313 10:56:56.750787 33013 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="f1eb6056de76c4d6a8863b61770ab5ed8e00f850c41514ac1273f8663adc746a" exitCode=0 Mar 13 10:56:56.751303 master-0 kubenswrapper[33013]: I0313 10:56:56.750798 33013 generic.go:334] "Generic (PLEG): container finished" podID="5843b0d4-a538-4261-b425-598e318c9d07" containerID="1a1885581af587b9ba505d0bc5381467495165cc081fe48fe67060864afa4c7a" exitCode=0 Mar 13 10:56:56.754187 master-0 kubenswrapper[33013]: I0313 10:56:56.754136 33013 generic.go:334] "Generic (PLEG): container finished" podID="5aa507cf-017d-44f5-8662-77547f82fb51" containerID="ac19f75968e7d0eae52d08a547ded61c84c9448d5897a33d898474c90867405f" exitCode=0 Mar 13 10:56:56.754187 master-0 kubenswrapper[33013]: I0313 10:56:56.754172 33013 generic.go:334] "Generic (PLEG): container finished" podID="5aa507cf-017d-44f5-8662-77547f82fb51" containerID="a18c04cbfe5a7abf5768c58054cd016d672f1f9f4ba2bd72d74624ba275dea07" exitCode=0 Mar 13 10:56:56.756409 master-0 kubenswrapper[33013]: I0313 10:56:56.756360 33013 generic.go:334] "Generic (PLEG): container finished" podID="eb778c86-ea51-4eab-82b8-a8e0bec0f050" containerID="05d3fbc3bee5182c5f9073fc6b00e89e15ad18832a082318ea3763b5cb1e923e" exitCode=0 Mar 13 10:56:56.759110 master-0 kubenswrapper[33013]: I0313 10:56:56.759080 33013 generic.go:334] "Generic (PLEG): container finished" podID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerID="080bec4d72d5bc2a5ff39e071b40e2b30bc6c479f34acbf3881af3489f75aaae" exitCode=0 Mar 13 10:56:56.759110 master-0 kubenswrapper[33013]: I0313 10:56:56.759101 33013 generic.go:334] "Generic (PLEG): container finished" podID="6ed47c57-533f-43e4-88eb-07da29b4878f" containerID="5948c776742a66ca9c8dc4ab4653ab39ea0f5fc6e05a6a107b0cddf0d69c875e" exitCode=0 Mar 13 10:56:56.761860 master-0 kubenswrapper[33013]: I0313 10:56:56.761790 33013 generic.go:334] "Generic (PLEG): 
container finished" podID="00e8e251-40d9-458a-92a7-9b2e91dc7359" containerID="ff391d9c59813842d72b9912aea0684a5fa08ec853cdfa9eb1e377087c9747df" exitCode=0 Mar 13 10:56:56.765522 master-0 kubenswrapper[33013]: I0313 10:56:56.765450 33013 generic.go:334] "Generic (PLEG): container finished" podID="2a05e72d-836f-40e0-8a5c-ee02dce494b3" containerID="490778339a50279f0baab46d399265e6afeef4d74597e2f61bb4cc2c5373d122" exitCode=0 Mar 13 10:56:56.765522 master-0 kubenswrapper[33013]: I0313 10:56:56.765494 33013 generic.go:334] "Generic (PLEG): container finished" podID="2a05e72d-836f-40e0-8a5c-ee02dce494b3" containerID="709ea323c21fae26ff2a6680d0329165925afd7a1343d424221a5d0bd6de0958" exitCode=0 Mar 13 10:56:56.781781 master-0 kubenswrapper[33013]: I0313 10:56:56.781689 33013 generic.go:334] "Generic (PLEG): container finished" podID="ec3168fc-6c8f-4603-94e0-17b1ae22a802" containerID="294850f202234f4a9d138e028654f94bb9813203f7edf3397d10697e7a4b46a2" exitCode=0 Mar 13 10:56:56.800114 master-0 kubenswrapper[33013]: I0313 10:56:56.800033 33013 generic.go:334] "Generic (PLEG): container finished" podID="866cf034-8fd8-4f16-8e9b-68627228aa8d" containerID="838b4cfccf523638ccd0bf31bf9b16492b12c33b0f070423ea23f66b9d72c78e" exitCode=0 Mar 13 10:56:56.808144 master-0 kubenswrapper[33013]: I0313 10:56:56.808082 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x_bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/config-sync-controllers/0.log" Mar 13 10:56:56.808669 master-0 kubenswrapper[33013]: I0313 10:56:56.808632 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x_bfbaa57e-adac-48f8-8182-b4fdb42fbb9c/cluster-cloud-controller-manager/0.log" Mar 13 10:56:56.808748 master-0 kubenswrapper[33013]: I0313 10:56:56.808690 33013 generic.go:334] "Generic (PLEG): container 
finished" podID="bfbaa57e-adac-48f8-8182-b4fdb42fbb9c" containerID="9c62b3c2fdc62403c70efa03c341af1e11c584005c0854a7b9ae04a0957b3988" exitCode=1 Mar 13 10:56:56.808748 master-0 kubenswrapper[33013]: I0313 10:56:56.808719 33013 generic.go:334] "Generic (PLEG): container finished" podID="bfbaa57e-adac-48f8-8182-b4fdb42fbb9c" containerID="f5630038dc1bb4e46b0c3343da5e699daf5fd3e0af484ddecd21f624462048e4" exitCode=1 Mar 13 10:56:56.811569 master-0 kubenswrapper[33013]: E0313 10:56:56.811434 33013 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 10:56:56.816990 master-0 kubenswrapper[33013]: I0313 10:56:56.816942 33013 generic.go:334] "Generic (PLEG): container finished" podID="a1a998af-4fc0-4078-a6a0-93dde6c00508" containerID="b2d3650b18e8d4e9f38822804153cd7a45f1b0959bcb61f0ce6a90a1570211e0" exitCode=0 Mar 13 10:56:56.825077 master-0 kubenswrapper[33013]: I0313 10:56:56.825009 33013 generic.go:334] "Generic (PLEG): container finished" podID="1434c4a2-5c4d-478a-a16a-7d6a52ea3099" containerID="cd940301b6045fcf3388088b051ec834a3261f017e1dcca1b8063296e4c0a2f1" exitCode=0 Mar 13 10:56:56.829915 master-0 kubenswrapper[33013]: I0313 10:56:56.829849 33013 generic.go:334] "Generic (PLEG): container finished" podID="282bc9ff-1bc0-421b-9cd3-d88d7c5e5303" containerID="44045eb34dbce8a8d8c5bec28be559a0d562acea9909308b142b2b5b5860a229" exitCode=0 Mar 13 10:56:56.833810 master-0 kubenswrapper[33013]: I0313 10:56:56.833762 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/5.log" Mar 13 10:56:56.834840 master-0 kubenswrapper[33013]: I0313 10:56:56.834782 33013 generic.go:334] "Generic (PLEG): container finished" podID="7667717b-fb74-456b-8615-16475cb69e98" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0" exitCode=1 Mar 13 10:56:56.838801 master-0 
kubenswrapper[33013]: I0313 10:56:56.838751 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-d5b45_8a305f45-8689-45a8-8c8b-5954f2c863df/package-server-manager/0.log" Mar 13 10:56:56.839300 master-0 kubenswrapper[33013]: I0313 10:56:56.839244 33013 generic.go:334] "Generic (PLEG): container finished" podID="8a305f45-8689-45a8-8c8b-5954f2c863df" containerID="89274f7911bc25e38977ddb45d006b7195ff00ecbb96f23c5359ae00a584f176" exitCode=1 Mar 13 10:56:56.847610 master-0 kubenswrapper[33013]: I0313 10:56:56.847531 33013 generic.go:334] "Generic (PLEG): container finished" podID="86ae8cb8-72b3-4be6-9feb-ee0c0da42dba" containerID="5cf7d401ea622e52729b46eea598afe245447756a5d119bc7987bfb6c5cfb794" exitCode=0 Mar 13 10:56:56.849842 master-0 kubenswrapper[33013]: I0313 10:56:56.849800 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c/installer/0.log" Mar 13 10:56:56.849909 master-0 kubenswrapper[33013]: I0313 10:56:56.849852 33013 generic.go:334] "Generic (PLEG): container finished" podID="e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" containerID="3c84db0498138b2ad19628a630c45e3de3b287d4abdd1560f1b74b129ad3abaf" exitCode=1 Mar 13 10:56:56.852423 master-0 kubenswrapper[33013]: I0313 10:56:56.852388 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-gsr52_070b85a0-f076-4750-aa00-dabba401dc75/cluster-baremetal-operator/1.log" Mar 13 10:56:56.853122 master-0 kubenswrapper[33013]: I0313 10:56:56.853077 33013 generic.go:334] "Generic (PLEG): container finished" podID="070b85a0-f076-4750-aa00-dabba401dc75" containerID="d57d698ca8efd80b0c40df921aa32d90fdd37b423f221fd31fb6e45b5640ad03" exitCode=1 Mar 13 10:56:56.858500 master-0 kubenswrapper[33013]: I0313 10:56:56.858406 33013 generic.go:334] "Generic (PLEG): container finished" 
podID="077dd10388b9e3e48a07382126e86621" containerID="471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca" exitCode=0 Mar 13 10:56:56.860533 master-0 kubenswrapper[33013]: I0313 10:56:56.860490 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 10:56:56.860967 master-0 kubenswrapper[33013]: I0313 10:56:56.860927 33013 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="609f0e5551a709b73298eb7117d146c048b1a886bac85012fa0f0c1a2a1cd687" exitCode=1 Mar 13 10:56:56.860967 master-0 kubenswrapper[33013]: I0313 10:56:56.860958 33013 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="a6a13e582092662aa7c7eefb83f8515ba545374741aab1781847bd04e676290b" exitCode=0 Mar 13 10:56:56.871931 master-0 kubenswrapper[33013]: I0313 10:56:56.871872 33013 generic.go:334] "Generic (PLEG): container finished" podID="574bf255-14b3-40af-b240-2d3abd5b86b8" containerID="5562479ec1e49b40c330a36ec4d9ac6d15b4428df0c9b17bcdf8d8cf48cf7a09" exitCode=0 Mar 13 10:56:56.873444 master-0 kubenswrapper[33013]: I0313 10:56:56.873403 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_a55a2a95-178c-4fcd-9866-3a149948d1d3/installer/0.log" Mar 13 10:56:56.873538 master-0 kubenswrapper[33013]: I0313 10:56:56.873456 33013 generic.go:334] "Generic (PLEG): container finished" podID="a55a2a95-178c-4fcd-9866-3a149948d1d3" containerID="1095e539909ae9e46360f463a967bbc617daeb2d47612ebdc2519683e6fd658c" exitCode=1 Mar 13 10:56:56.876332 master-0 kubenswrapper[33013]: I0313 10:56:56.876284 33013 generic.go:334] "Generic (PLEG): container finished" podID="26cc0e72-8b4f-4087-89b9-05d2cf6df3f6" containerID="a1bf753439496bde197d1c543409be9bfb058607cd0879d7141d07df38f38943" exitCode=0 Mar 13 
10:56:56.883195 master-0 kubenswrapper[33013]: I0313 10:56:56.883145 33013 generic.go:334] "Generic (PLEG): container finished" podID="b8d40b37-0f3d-4531-9fa8-eda965d2337d" containerID="928f705a6df1a237b298e2f772354a8814379ea930e2d466bbe222c0fc185584" exitCode=0 Mar 13 10:56:56.883195 master-0 kubenswrapper[33013]: I0313 10:56:56.883184 33013 generic.go:334] "Generic (PLEG): container finished" podID="b8d40b37-0f3d-4531-9fa8-eda965d2337d" containerID="220a150d44b2158d9daff116df4a5c802964a9b842e1b8dda3de819c2cb69708" exitCode=0 Mar 13 10:56:56.883195 master-0 kubenswrapper[33013]: I0313 10:56:56.883193 33013 generic.go:334] "Generic (PLEG): container finished" podID="b8d40b37-0f3d-4531-9fa8-eda965d2337d" containerID="11c77f1b96585ddf0a5deeffc87c0df0c85a30ab4a6f38b300cbba0aba3b3555" exitCode=0 Mar 13 10:56:56.886109 master-0 kubenswrapper[33013]: I0313 10:56:56.886085 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-hszft_484e6d0b-d057-4658-8e49-bbe7e6f6ee86/control-plane-machine-set-operator/0.log" Mar 13 10:56:56.886206 master-0 kubenswrapper[33013]: I0313 10:56:56.886121 33013 generic.go:334] "Generic (PLEG): container finished" podID="484e6d0b-d057-4658-8e49-bbe7e6f6ee86" containerID="06f340bfe3defa99f6d96411a1e67581d7833b82a603be2ce7a6f91338e36131" exitCode=1 Mar 13 10:56:56.889137 master-0 kubenswrapper[33013]: I0313 10:56:56.889106 33013 generic.go:334] "Generic (PLEG): container finished" podID="1d72d950-cfb4-4ed5-9ad6-f7266b937493" containerID="0647723824e586709d350ad5bb33b6a1dfb3aeaa2aa48bea8b456cd7a39c8a13" exitCode=0 Mar 13 10:56:56.894838 master-0 kubenswrapper[33013]: I0313 10:56:56.894809 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-pzjxd_d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33/cluster-autoscaler-operator/0.log" Mar 13 10:56:56.895319 master-0 kubenswrapper[33013]: I0313 10:56:56.895270 33013 
generic.go:334] "Generic (PLEG): container finished" podID="d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33" containerID="33f485f0f2a1052d43c6456fe1c55f48c0eae8c08bc7615626d7dbf11fd3c26a" exitCode=255 Mar 13 10:56:56.902276 master-0 kubenswrapper[33013]: I0313 10:56:56.902232 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_9e06733a-9c47-4bcf-a5e2-946db8e2714b/installer/0.log" Mar 13 10:56:56.902440 master-0 kubenswrapper[33013]: I0313 10:56:56.902299 33013 generic.go:334] "Generic (PLEG): container finished" podID="9e06733a-9c47-4bcf-a5e2-946db8e2714b" containerID="c87d032f992ab15941d07ccbd459ecd39c5fd54e6df8b197a56c0bc747f7d534" exitCode=1 Mar 13 10:56:56.907637 master-0 kubenswrapper[33013]: I0313 10:56:56.907562 33013 generic.go:334] "Generic (PLEG): container finished" podID="aa6a75ab47c06be4e74d05f552da4470" containerID="227c8746e47f893b6d381d14bd366358a094ecb7ef45b704033632f673e46c1d" exitCode=0 Mar 13 10:56:56.910036 master-0 kubenswrapper[33013]: I0313 10:56:56.909993 33013 generic.go:334] "Generic (PLEG): container finished" podID="1d5f5440-b10c-40ea-9f1a-5f03babc1bd9" containerID="4d75e74c4df786ae928889ac54113d7b673c3ebf79a2a08a34f9fbe9b63c1453" exitCode=0 Mar 13 10:56:56.912471 master-0 kubenswrapper[33013]: I0313 10:56:56.912419 33013 generic.go:334] "Generic (PLEG): container finished" podID="0ac1a605-d2d5-4004-96f5-121c20555bde" containerID="9fa1a1f3dc431f4d1989376ade490c97b3ca19baaab0c502fea959b427739c54" exitCode=0 Mar 13 10:56:56.917847 master-0 kubenswrapper[33013]: I0313 10:56:56.917820 33013 generic.go:334] "Generic (PLEG): container finished" podID="d0f42a72-24c7-49e6-8edb-97b2b0d6183a" containerID="d13596a56d4b7303ec265a6d08c85fbe9795571675ab43829e0e95ae8ae9fbbf" exitCode=0 Mar 13 10:56:56.920965 master-0 kubenswrapper[33013]: I0313 10:56:56.920918 33013 generic.go:334] "Generic (PLEG): container finished" podID="5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" 
containerID="5ee286f0b3cdb47865421f7ee4618ced9d85dbc545353442dc4336443d56416e" exitCode=0 Mar 13 10:56:56.922937 master-0 kubenswrapper[33013]: I0313 10:56:56.922894 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_7baf3efc-04dc-4c17-9c2a-397ac022d281/installer/0.log" Mar 13 10:56:56.923014 master-0 kubenswrapper[33013]: I0313 10:56:56.922956 33013 generic.go:334] "Generic (PLEG): container finished" podID="7baf3efc-04dc-4c17-9c2a-397ac022d281" containerID="56c9b868392613f72b3a821d9f4fd3508fb4759378ef047d1a2286ea13733ed0" exitCode=1 Mar 13 10:56:56.925560 master-0 kubenswrapper[33013]: I0313 10:56:56.925466 33013 generic.go:334] "Generic (PLEG): container finished" podID="66f49a19-0e3b-4611-b8a6-5f5687fa20b6" containerID="2b215655327c77c15b5c8c962ef77f234a333c87823e067c5e476916a7abcdf5" exitCode=0 Mar 13 10:56:56.927944 master-0 kubenswrapper[33013]: I0313 10:56:56.927914 33013 generic.go:334] "Generic (PLEG): container finished" podID="5b796628-a6ca-4d5c-9870-0ca60b9372aa" containerID="577603115cdc92c071dc30636bcb46ec49417f5c3a611797a0ac27b51d21642e" exitCode=0 Mar 13 10:56:56.929977 master-0 kubenswrapper[33013]: I0313 10:56:56.929943 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-9z8mk_f87662b9-6ac6-44f3-8a16-ff858c2baa91/approver/1.log" Mar 13 10:56:56.930337 master-0 kubenswrapper[33013]: I0313 10:56:56.930309 33013 generic.go:334] "Generic (PLEG): container finished" podID="f87662b9-6ac6-44f3-8a16-ff858c2baa91" containerID="7e21ba1a4a052f4311590e81daf7c7043a43eea8119ade6c511b95ed35202221" exitCode=1 Mar 13 10:56:56.933169 master-0 kubenswrapper[33013]: I0313 10:56:56.933131 33013 generic.go:334] "Generic (PLEG): container finished" podID="4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b" containerID="8e7c2978cc4dfb448748849f09d2780b89faa57635195de3a271f009a5331f69" exitCode=0 Mar 13 10:56:56.935195 master-0 kubenswrapper[33013]: I0313 
10:56:56.935154 33013 generic.go:334] "Generic (PLEG): container finished" podID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerID="3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432" exitCode=0 Mar 13 10:56:56.937415 master-0 kubenswrapper[33013]: I0313 10:56:56.937384 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-f46qd_257a4a8b-014c-4473-80a0-e95cf6d41bf1/manager/1.log" Mar 13 10:56:56.937880 master-0 kubenswrapper[33013]: I0313 10:56:56.937839 33013 generic.go:334] "Generic (PLEG): container finished" podID="257a4a8b-014c-4473-80a0-e95cf6d41bf1" containerID="505693401e0336c91ab91119b9f53889693ae2d79a1c0a657057ebc4d2c80fa9" exitCode=1 Mar 13 10:56:56.942045 master-0 kubenswrapper[33013]: I0313 10:56:56.942014 33013 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="b184c6d2b52d3742ae6eeca434d2692ca2f0557fa56d061b66512b5f8dfea300" exitCode=0 Mar 13 10:56:56.942045 master-0 kubenswrapper[33013]: I0313 10:56:56.942039 33013 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="504900721f39956a914d16037b49b7d46bf9d8673745bf5af0d69241e9d13d4d" exitCode=0 Mar 13 10:56:56.942174 master-0 kubenswrapper[33013]: I0313 10:56:56.942051 33013 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="2c12c83d53e7d737ce9ecb47ead0648457311377deed39823b1a6e3ee6b6647d" exitCode=0 Mar 13 10:56:56.943879 master-0 kubenswrapper[33013]: I0313 10:56:56.943647 33013 generic.go:334] "Generic (PLEG): container finished" podID="533638d2-44ce-4cf8-aa47-a6b89c94621d" containerID="fe9e59028a5e05ef377e39eb4fc61f98da9b8df986b802547501f57b158fbf17" exitCode=0 Mar 13 10:56:56.947523 master-0 kubenswrapper[33013]: I0313 10:56:56.947481 33013 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-9fptc_42b4d53c-af72-44c8-9605-271445f95f87/cluster-node-tuning-operator/0.log" Mar 13 10:56:56.947634 master-0 kubenswrapper[33013]: I0313 10:56:56.947543 33013 generic.go:334] "Generic (PLEG): container finished" podID="42b4d53c-af72-44c8-9605-271445f95f87" containerID="4898ddf0b80011b0f9f0a24077d87c24f74962cf228e87be2367d09c896182b1" exitCode=1 Mar 13 10:56:56.950165 master-0 kubenswrapper[33013]: I0313 10:56:56.950138 33013 generic.go:334] "Generic (PLEG): container finished" podID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerID="a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983" exitCode=0 Mar 13 10:56:56.952031 master-0 kubenswrapper[33013]: I0313 10:56:56.951995 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_feb7b798-15b5-4004-87d0-96ce9381cdbe/installer/0.log" Mar 13 10:56:56.952106 master-0 kubenswrapper[33013]: I0313 10:56:56.952040 33013 generic.go:334] "Generic (PLEG): container finished" podID="feb7b798-15b5-4004-87d0-96ce9381cdbe" containerID="28aad4d86302888f158c61e3738904f7d878550af4392e7ed53add211247a0cd" exitCode=1 Mar 13 10:56:56.957352 master-0 kubenswrapper[33013]: I0313 10:56:56.957308 33013 generic.go:334] "Generic (PLEG): container finished" podID="1c12a5d5-711f-4663-974c-c4b06e15fc39" containerID="3711f960c560ecb4568aab641312d36db294714abc5c774ce0693e59fb2ba6d8" exitCode=0 Mar 13 10:56:56.959873 master-0 kubenswrapper[33013]: I0313 10:56:56.959830 33013 generic.go:334] "Generic (PLEG): container finished" podID="9da11462-a91d-4d02-8614-78b4c5b2f7e2" containerID="00da2a7b5527973fbd194100f44590333c80d5dcf0e49c8db3fcca2c086cc934" exitCode=0 Mar 13 10:56:56.966044 master-0 kubenswrapper[33013]: I0313 10:56:56.965964 33013 generic.go:334] "Generic (PLEG): container finished" podID="e0d0a863-e526-43af-81e7-427336d845b0" 
containerID="fe9c58db2cbc934a8ee0143a63a15e0c0fbc1471f2636da95b789cc5a70ed0f0" exitCode=0 Mar 13 10:56:56.970505 master-0 kubenswrapper[33013]: I0313 10:56:56.970465 33013 generic.go:334] "Generic (PLEG): container finished" podID="1769d48d-7ef0-48ee-9b7d-b46151ae5df6" containerID="5399579cbf50883dcc4aa7699616e64f69ad85ad80602aae96557b44afc05a5a" exitCode=0 Mar 13 10:56:56.986915 master-0 kubenswrapper[33013]: I0313 10:56:56.986862 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-cbhxt_6622be09-206e-4d02-90ca-6d9f2fc852aa/snapshot-controller/3.log" Mar 13 10:56:56.987010 master-0 kubenswrapper[33013]: I0313 10:56:56.986955 33013 generic.go:334] "Generic (PLEG): container finished" podID="6622be09-206e-4d02-90ca-6d9f2fc852aa" containerID="5e8801f6c03277ad0f15ce8a685fd31f58e857c66d9382e667630e4deb5cc346" exitCode=1 Mar 13 10:56:56.991997 master-0 kubenswrapper[33013]: I0313 10:56:56.991930 33013 generic.go:334] "Generic (PLEG): container finished" podID="4a1b43c4-55b9-4c72-ba7c-9089bf28cf16" containerID="a89f2be4905476a4b6dbbd07f3ca4359a228444679e496e247030ce754dfdd31" exitCode=0 Mar 13 10:56:56.991997 master-0 kubenswrapper[33013]: I0313 10:56:56.991988 33013 generic.go:334] "Generic (PLEG): container finished" podID="4a1b43c4-55b9-4c72-ba7c-9089bf28cf16" containerID="4eac15e946c14f20f0a00649c87e90500eec23139a51731688b3e55b52f0796d" exitCode=0 Mar 13 10:56:56.993791 master-0 kubenswrapper[33013]: I0313 10:56:56.993741 33013 generic.go:334] "Generic (PLEG): container finished" podID="37b2e803-302b-4650-b18f-d3d2dd703bd5" containerID="881405211eef76d473660b20a0d3c866e54acadcefe8c182ab1f5f97e108929c" exitCode=0 Mar 13 10:56:56.996485 master-0 kubenswrapper[33013]: I0313 10:56:56.996457 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-bg6zf_b10584c2-ef04-4649-bcb6-9222c9530c3f/manager/1.log" 
Mar 13 10:56:56.996899 master-0 kubenswrapper[33013]: I0313 10:56:56.996868 33013 generic.go:334] "Generic (PLEG): container finished" podID="b10584c2-ef04-4649-bcb6-9222c9530c3f" containerID="2a9aaa81e2cc4ad44480999dff8ac1b2c80678408fd67b6fb365310487f92570" exitCode=1 Mar 13 10:56:56.998744 master-0 kubenswrapper[33013]: I0313 10:56:56.998707 33013 generic.go:334] "Generic (PLEG): container finished" podID="beee81ef-5a3a-4df2-85d5-2573679d261f" containerID="5981b0f268f1a64d5e07b672a70671406c05cd6d7d9cce3115bdfd6054d046d6" exitCode=0 Mar 13 10:56:56.998744 master-0 kubenswrapper[33013]: I0313 10:56:56.998739 33013 generic.go:334] "Generic (PLEG): container finished" podID="beee81ef-5a3a-4df2-85d5-2573679d261f" containerID="3d29df9026b8be32c69c5d366778bdae010d5195fd7cffbac836292c45f99342" exitCode=0 Mar 13 10:56:57.003221 master-0 kubenswrapper[33013]: I0313 10:56:57.003179 33013 generic.go:334] "Generic (PLEG): container finished" podID="b9624a9a-68dd-4cc1-a0a4-23fe297ceba3" containerID="5a756cbc772c72bcdf3f7b55e67e0c66e077c8bc9496058fd8ad31da12ffe6d7" exitCode=0 Mar 13 10:56:57.007000 master-0 kubenswrapper[33013]: I0313 10:56:57.006956 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-jcn8f_ec121f87-93ea-468c-a25f-2ec5e7d0e0ee/machine-approver-controller/0.log" Mar 13 10:56:57.007335 master-0 kubenswrapper[33013]: I0313 10:56:57.007293 33013 generic.go:334] "Generic (PLEG): container finished" podID="ec121f87-93ea-468c-a25f-2ec5e7d0e0ee" containerID="64678ebcb68e6bed917a1b002aba4f9986d59e81a6fdab83010f8da8b3807323" exitCode=255 Mar 13 10:56:57.010325 master-0 kubenswrapper[33013]: I0313 10:56:57.010287 33013 generic.go:334] "Generic (PLEG): container finished" podID="5ed5e77b-948b-4d94-ac9f-440ee3c07e18" containerID="7f952b61d71e907b8ab35c403ca342055b58e2b44f1c8092061e8d04df9ac501" exitCode=0 Mar 13 10:56:57.011861 master-0 kubenswrapper[33013]: E0313 10:56:57.011832 33013 
kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 10:56:57.012406 master-0 kubenswrapper[33013]: I0313 10:56:57.012365 33013 generic.go:334] "Generic (PLEG): container finished" podID="c87545aa-11c2-4e6e-8c13-16eeff3be83b" containerID="a54ca7738955f7ec185b4cde3784d0158686a36edc078876172035717347c129" exitCode=0 Mar 13 10:56:57.015134 master-0 kubenswrapper[33013]: I0313 10:56:57.015099 33013 generic.go:334] "Generic (PLEG): container finished" podID="549bd192-0235-4994-b485-f1b10d16f6b5" containerID="271da4cc5b20956051ed1d7f97405260dffc34901d137d8e75b3c407349229eb" exitCode=0 Mar 13 10:56:57.217674 master-0 kubenswrapper[33013]: I0313 10:56:57.217451 33013 manager.go:324] Recovery completed Mar 13 10:56:57.321831 master-0 kubenswrapper[33013]: I0313 10:56:57.321772 33013 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 10:56:57.321831 master-0 kubenswrapper[33013]: I0313 10:56:57.321809 33013 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 10:56:57.321831 master-0 kubenswrapper[33013]: I0313 10:56:57.321843 33013 state_mem.go:36] "Initialized new in-memory state store" Mar 13 10:56:57.322152 master-0 kubenswrapper[33013]: I0313 10:56:57.322025 33013 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 10:56:57.322152 master-0 kubenswrapper[33013]: I0313 10:56:57.322036 33013 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 10:56:57.322152 master-0 kubenswrapper[33013]: I0313 10:56:57.322056 33013 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 13 10:56:57.322152 master-0 kubenswrapper[33013]: I0313 10:56:57.322062 33013 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 13 10:56:57.322152 master-0 kubenswrapper[33013]: I0313 10:56:57.322068 33013 policy_none.go:49] "None policy: Start" Mar 13 10:56:57.325177 master-0 kubenswrapper[33013]: I0313 10:56:57.325128 33013 
memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 10:56:57.325247 master-0 kubenswrapper[33013]: I0313 10:56:57.325193 33013 state_mem.go:35] "Initializing new in-memory state store" Mar 13 10:56:57.325478 master-0 kubenswrapper[33013]: I0313 10:56:57.325449 33013 state_mem.go:75] "Updated machine memory state" Mar 13 10:56:57.325478 master-0 kubenswrapper[33013]: I0313 10:56:57.325467 33013 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 13 10:56:57.347918 master-0 kubenswrapper[33013]: I0313 10:56:57.347824 33013 manager.go:334] "Starting Device Plugin manager" Mar 13 10:56:57.348207 master-0 kubenswrapper[33013]: I0313 10:56:57.348082 33013 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 10:56:57.348207 master-0 kubenswrapper[33013]: I0313 10:56:57.348140 33013 server.go:79] "Starting device plugin registration server" Mar 13 10:56:57.348967 master-0 kubenswrapper[33013]: I0313 10:56:57.348930 33013 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 10:56:57.349054 master-0 kubenswrapper[33013]: I0313 10:56:57.348950 33013 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 10:56:57.349190 master-0 kubenswrapper[33013]: I0313 10:56:57.349129 33013 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 10:56:57.349323 master-0 kubenswrapper[33013]: I0313 10:56:57.349291 33013 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 10:56:57.349323 master-0 kubenswrapper[33013]: I0313 10:56:57.349305 33013 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 10:56:57.412481 master-0 kubenswrapper[33013]: I0313 10:56:57.412436 33013 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3684ce24f4407551543f74ac9f1a5ab3d105e55ba443e4519febf4f030d8826c" Mar 13 10:56:57.412481 master-0 kubenswrapper[33013]: I0313 10:56:57.412479 33013 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 10:56:57.414987 master-0 kubenswrapper[33013]: I0313 10:56:57.414961 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8806b35c314af20732bacfc49d8a1556a0a610503737d33085a256d12444c681" Mar 13 10:56:57.415137 master-0 kubenswrapper[33013]: I0313 10:56:57.414987 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a275b154e2bd75d46956f1b7e89d0825c0f4544634205616a815e2c59d1fd381" Mar 13 10:56:57.415137 master-0 kubenswrapper[33013]: I0313 10:56:57.415117 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b45c64d6449de0fbb67e8c6c87b585367854c2872ab4281e8171784f28b9d333" Mar 13 10:56:57.415363 master-0 kubenswrapper[33013]: I0313 10:56:57.415182 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"a6907ade1777d6a7c993aeb23acaeb6fdd891b625a9b035210953700ede72f63"} Mar 13 10:56:57.415363 master-0 kubenswrapper[33013]: I0313 10:56:57.415293 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"3b9f539be02f519c82f90f79644538b0615d221de57b1fd6c7c4726d8ebe602e"} Mar 13 10:56:57.415524 master-0 
kubenswrapper[33013]: I0313 10:56:57.415511 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b6e436b9cc09a918e66ed32313004ed8edc16d26f739da2414fd5c6347334d8" Mar 13 10:56:57.415657 master-0 kubenswrapper[33013]: I0313 10:56:57.415532 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3"} Mar 13 10:56:57.415657 master-0 kubenswrapper[33013]: I0313 10:56:57.415546 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445"} Mar 13 10:56:57.415657 master-0 kubenswrapper[33013]: I0313 10:56:57.415572 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378"} Mar 13 10:56:57.415657 master-0 kubenswrapper[33013]: I0313 10:56:57.415600 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07"} Mar 13 10:56:57.415657 master-0 kubenswrapper[33013]: I0313 10:56:57.415631 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984"} Mar 13 10:56:57.415657 master-0 kubenswrapper[33013]: I0313 10:56:57.415644 33013 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca"} Mar 13 10:56:57.415657 master-0 kubenswrapper[33013]: I0313 10:56:57.415678 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"c797020833454d5ed2c33acc860a0f30fce513778328e3b025208a981e1fff3f"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415690 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"9c670eb6abb5de03cd978fcc4efcfd81c65dafc0d610959d205735ca6df3ab91"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415714 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"609f0e5551a709b73298eb7117d146c048b1a886bac85012fa0f0c1a2a1cd687"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415738 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"a6a13e582092662aa7c7eefb83f8515ba545374741aab1781847bd04e676290b"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415761 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"2c906939264631f5617f60445cdb650e10cc3bf3d0cf16dc4b104f010debfbc1"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 
10:56:57.415800 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07760884fb73f623d10ed12cbe3f37005e2db59b258a61a52af5d3fc8c6b9063" Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415888 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8624adf36154fe1f7cdb5c9eb99ed2b301e80e18fd1f6d8154a250c1a73d647b" Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415899 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"19dc3a66f25c011c8069f1ee0dadbbce99939d7e2ec153af7962229cb1af28b2"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415926 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"d7b0acead87987f502b3f41605ffb9cdd08548721125b8c7786f9988d47d3b01"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415948 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"1e5bb19d8f372bc256a34ecf958e795ff2a4e0422d6d1eb0385564047658d8ca"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415971 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerDied","Data":"227c8746e47f893b6d381d14bd366358a094ecb7ef45b704033632f673e46c1d"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.415995 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"a63b48f34aee10c2a5c7b02c8aaeb3e69aba820b8bbd971ae28a10945e9803c8"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416040 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e94e101afe6d310b4795ed9ac97800bdab3626ccbb55e076af5e0699e89feaba" Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416063 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b9f88ad828dc6f0b9caaa000ec4304ee1cea2959cd111893dbdd54815ac13d" Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416142 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5a793ad00a79db57ae38050e7749ba9d9b9d24a798febba0cba49980889c7482"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416165 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"7dca9ce0c495134e155aab91ff3f2ccfbf29b25d2e905ee8170df03b7df6823b"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416188 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"2f6be16b300f0db83df5af0658e94350a338f2488c82335f15c838e841d5ec1e"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416199 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"669c949c8fe3c563ab473f1617a1daafb359deef2739ada0b41fbbdd93bb8d46"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416210 33013 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"ca7636226884dc934652ea1520a35839a32c066fbf42abbabb2eb40d4d464bfd"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416221 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"b184c6d2b52d3742ae6eeca434d2692ca2f0557fa56d061b66512b5f8dfea300"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416244 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"504900721f39956a914d16037b49b7d46bf9d8673745bf5af0d69241e9d13d4d"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416266 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"2c12c83d53e7d737ce9ecb47ead0648457311377deed39823b1a6e3ee6b6647d"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416288 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"c7d9738c3adc0c979eef42141f9dc2b629b15190348d5c5364a237fdd93a9dff"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416304 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ab44048b41e7d6482d53b636df4ef12bcf58ac194024559096f0e679ffee57" Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416348 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54845f97730049024e50483462ec2fdbbd2a3bf95c64c4c162c260a6e6834b4f" Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416396 33013 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ab9eaaa1f8cf34f71e1913b674d1f9da187c6ac13d0953972a6b0cfd8a11121" Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416415 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0edd269bc9bfd58457b3b88ab218fa96e34778af571fb8288c4d56256e1a1e4d" Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416425 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"8ef4ca3fd55a1fdc272bbe95b06fd59615f0875eb40d0760256756564104e8c0"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416448 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"c628e765eaabffc23db2c1635eeb15519da1c1cbfb8a52269fa9da1481c956a3"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416471 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"9055e315c8a514a2e7caff4002ccd935f6b8f26c1543cb6f8b2224217493efae"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416484 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"b13c60bcec66207b3ea2a744a2bea2122f3896924902c67d892d48a026ec7cde"} Mar 13 10:56:57.416455 master-0 kubenswrapper[33013]: I0313 10:56:57.416516 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"c7ee7c0157ea1a3aa7abb463c26a21b9f9c80a7c51726be3dad9c08112783426"} Mar 13 10:56:57.419060 master-0 kubenswrapper[33013]: I0313 10:56:57.416620 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9681f2e75cd38c3ac67ed3e69a8ec48ca8451d34a1c4febdd60d09ed10b5be76" Mar 13 10:56:57.431736 master-0 kubenswrapper[33013]: E0313 10:56:57.431654 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:56:57.432431 master-0 kubenswrapper[33013]: E0313 10:56:57.432386 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:56:57.432817 master-0 kubenswrapper[33013]: E0313 10:56:57.432776 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.449475 master-0 kubenswrapper[33013]: I0313 10:56:57.449424 33013 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:56:57.452431 master-0 kubenswrapper[33013]: I0313 10:56:57.452386 33013 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:56:57.452431 master-0 kubenswrapper[33013]: I0313 10:56:57.452431 33013 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:56:57.452545 master-0 kubenswrapper[33013]: I0313 10:56:57.452441 33013 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:56:57.452545 master-0 kubenswrapper[33013]: I0313 10:56:57.452518 33013 kubelet_node_status.go:76] "Attempting 
to register node" node="master-0" Mar 13 10:56:57.455976 master-0 kubenswrapper[33013]: E0313 10:56:57.455930 33013 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Mar 13 10:56:57.525252 master-0 kubenswrapper[33013]: I0313 10:56:57.525110 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.525252 master-0 kubenswrapper[33013]: I0313 10:56:57.525156 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:56:57.525252 master-0 kubenswrapper[33013]: I0313 10:56:57.525179 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:56:57.525252 master-0 kubenswrapper[33013]: I0313 10:56:57.525197 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:57.525252 master-0 kubenswrapper[33013]: I0313 10:56:57.525214 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:57.525252 master-0 kubenswrapper[33013]: I0313 10:56:57.525257 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525277 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525304 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525327 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod 
\"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525341 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525404 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525450 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525472 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525491 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525506 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525520 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525536 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525567 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525607 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:56:57.525677 master-0 kubenswrapper[33013]: I0313 10:56:57.525629 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.627117 master-0 kubenswrapper[33013]: I0313 10:56:57.627041 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627175 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627315 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.628019 master-0 
kubenswrapper[33013]: I0313 10:56:57.627353 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627374 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627397 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627414 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627433 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 
10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627448 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627462 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627488 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627491 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627503 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 
10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627546 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627549 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627620 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627652 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627689 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:57.628019 master-0 
kubenswrapper[33013]: I0313 10:56:57.627747 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627777 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627773 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627811 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627830 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" 
Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627808 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627862 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627896 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627942 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627961 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627977 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627991 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.627992 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.628019 master-0 kubenswrapper[33013]: I0313 10:56:57.628010 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.630227 master-0 kubenswrapper[33013]: I0313 10:56:57.628062 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.630227 master-0 kubenswrapper[33013]: I0313 10:56:57.628157 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.630227 master-0 kubenswrapper[33013]: I0313 10:56:57.628214 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.630227 master-0 kubenswrapper[33013]: I0313 10:56:57.628269 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.630227 master-0 kubenswrapper[33013]: I0313 10:56:57.628302 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.630227 master-0 kubenswrapper[33013]: I0313 10:56:57.628343 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 10:56:57.630227 master-0 kubenswrapper[33013]: I0313 10:56:57.628383 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") 
pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:56:57.630227 master-0 kubenswrapper[33013]: I0313 10:56:57.628455 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 10:56:57.656072 master-0 kubenswrapper[33013]: I0313 10:56:57.656009 33013 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 10:56:57.658187 master-0 kubenswrapper[33013]: I0313 10:56:57.658115 33013 apiserver.go:52] "Watching apiserver" Mar 13 10:56:57.659416 master-0 kubenswrapper[33013]: I0313 10:56:57.659355 33013 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 10:56:57.659501 master-0 kubenswrapper[33013]: I0313 10:56:57.659463 33013 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 10:56:57.659576 master-0 kubenswrapper[33013]: I0313 10:56:57.659493 33013 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 10:56:57.660019 master-0 kubenswrapper[33013]: I0313 10:56:57.659848 33013 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 10:56:57.664621 master-0 kubenswrapper[33013]: E0313 10:56:57.664515 33013 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Mar 13 10:56:57.687409 master-0 kubenswrapper[33013]: I0313 10:56:57.687352 33013 reflector.go:368] Caches populated for 
*v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 10:56:57.689977 master-0 kubenswrapper[33013]: I0313 10:56:57.689889 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp","openshift-monitoring/telemeter-client-6745c97c48-85rlf","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2","openshift-dns-operator/dns-operator-589895fbb7-wjrpm","openshift-ingress/router-default-79f8cd6fdd-b4x54","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs","openshift-monitoring/metrics-server-68597ccc5b-xrb8c","openshift-monitoring/node-exporter-mtcsw","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd","openshift-ingress-canary/ingress-canary-dxhl9","openshift-kube-apiserver/installer-1-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-machine-config-operator/machine-config-daemon-gdfnq","openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v","openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq","openshift-dns/dns-default-zc596","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn","openshift-dns/node-resolver-tfwn8","openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr","openshift-network-node-identity/network-node-identity-9z8mk","openshift-network-operator/network-operator-7c649bf6d4-6vpl4","openshift-multus/network-metrics-daemon-jz2lp","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fp
tc","openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74","openshift-etcd/installer-2-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-marketplace/marketplace-operator-64bf9778cb-85x6d","openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h","openshift-etcd/etcd-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh","openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt","openshift-ingress-operator/ingress-operator-677db989d6-tzd9b","openshift-kube-controller-manager/installer-1-master-0","openshift-marketplace/community-operators-vr4ts","openshift-service-ca/service-ca-84bfdbbb7f-l8h7l","openshift-kube-controller-manager/installer-4-master-0","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft","openshift-marketplace/certified-operators-bgvrc","openshift-apiserver/apiserver-65bc99cdf7-7rjbr","openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8","openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx","openshift-multus/multus-additional-cni-plugins-mc5nc","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd","openshift-marketplace/redhat-marketplace-mrztj","openshift-marketplace/redhat-operators-jdzpd","openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl","openshift-kube-scheduler/installer-4-master-0","openshift-kube-scheduler/installer-5-master-0","openshift-monitoring/cluster-monitoring-operator-674
cbfbd9d-vk9qz","openshift-oauth-apiserver/apiserver-778fb45b4-65f7b","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl","openshift-ovn-kubernetes/ovnkube-node-hztqp","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8","openshift-kube-apiserver/installer-2-master-0","openshift-machine-config-operator/machine-config-server-mhk8z","openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv","openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm","openshift-network-diagnostics/network-check-target-96vwf","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv","openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw","openshift-etcd/installer-1-master-0","openshift-kube-apiserver/installer-3-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz","openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g","assisted-installer/assisted-installer-controller-s68gq","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-controller-manager/installer-3-retry-1-master-0","openshift-kube-scheduler/installer-6-master-0","openshift-multus/multus-admission-controller-7769569c45-rshw5","openshift-multus/multus-qng6t","openshift-network-operator/iptables-alerter-gdjjd","openshift-cluster-node-tuning-operator/tuned-7wkqw","openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst","openshift-controller-manager/controller-manager-867876d6b6-tpq67","openshift-kube-controller-manager/installer-3-master-0","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt","openshift-insights/insights-operator-8f89dfddd-nhsd9","openshift-m
achine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"] Mar 13 10:56:57.690282 master-0 kubenswrapper[33013]: I0313 10:56:57.690211 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-s68gq" Mar 13 10:56:57.693346 master-0 kubenswrapper[33013]: I0313 10:56:57.693296 33013 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="1018719a-c6e6-4625-9309-9302ae0dfe9b" Mar 13 10:56:57.696967 master-0 kubenswrapper[33013]: I0313 10:56:57.696901 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 10:56:57.696967 master-0 kubenswrapper[33013]: I0313 10:56:57.696901 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 10:56:57.697565 master-0 kubenswrapper[33013]: I0313 10:56:57.697034 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 10:56:57.697868 master-0 kubenswrapper[33013]: I0313 10:56:57.697822 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 10:56:57.697868 master-0 kubenswrapper[33013]: I0313 10:56:57.697840 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 10:56:57.698256 master-0 kubenswrapper[33013]: I0313 10:56:57.698052 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 10:56:57.703454 master-0 kubenswrapper[33013]: I0313 10:56:57.701146 33013 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Mar 13 10:56:57.703454 master-0 kubenswrapper[33013]: I0313 10:56:57.701220 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 10:56:57.704047 master-0 kubenswrapper[33013]: I0313 10:56:57.701287 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 10:56:57.704047 master-0 kubenswrapper[33013]: I0313 10:56:57.703964 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 10:56:57.704047 master-0 kubenswrapper[33013]: I0313 10:56:57.703993 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 10:56:57.704383 master-0 kubenswrapper[33013]: I0313 10:56:57.704223 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 10:56:57.704383 master-0 kubenswrapper[33013]: I0313 10:56:57.704272 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 10:56:57.704383 master-0 kubenswrapper[33013]: I0313 10:56:57.704289 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 10:56:57.705141 master-0 kubenswrapper[33013]: I0313 10:56:57.704399 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 10:56:57.705141 master-0 kubenswrapper[33013]: I0313 10:56:57.704607 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 10:56:57.705141 master-0 kubenswrapper[33013]: I0313 10:56:57.704648 33013 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 10:56:57.705141 master-0 kubenswrapper[33013]: I0313 10:56:57.704752 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 10:56:57.705141 master-0 kubenswrapper[33013]: I0313 10:56:57.704878 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 10:56:57.705562 master-0 kubenswrapper[33013]: I0313 10:56:57.705225 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 10:56:57.705562 master-0 kubenswrapper[33013]: I0313 10:56:57.705530 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 10:56:57.706056 master-0 kubenswrapper[33013]: I0313 10:56:57.705994 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 10:56:57.707925 master-0 kubenswrapper[33013]: I0313 10:56:57.707866 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 10:56:57.708548 master-0 kubenswrapper[33013]: I0313 10:56:57.708491 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 10:56:57.709928 master-0 kubenswrapper[33013]: I0313 10:56:57.709492 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 10:56:57.709928 master-0 kubenswrapper[33013]: I0313 10:56:57.709807 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.710029 33013 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.710146 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.710325 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.710488 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.710655 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.710689 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.710851 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.710944 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.710984 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.711200 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.711207 33013 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.711337 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.711347 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.711401 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.711554 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.711574 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.711731 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.712078 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 10:56:57.712189 master-0 kubenswrapper[33013]: I0313 10:56:57.712155 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 10:56:57.713966 master-0 kubenswrapper[33013]: I0313 10:56:57.712726 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 10:56:57.713966 master-0 kubenswrapper[33013]: I0313 10:56:57.712731 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 10:56:57.714427 master-0 kubenswrapper[33013]: I0313 10:56:57.714336 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 10:56:57.715799 master-0 kubenswrapper[33013]: I0313 10:56:57.715629 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 10:56:57.715983 master-0 kubenswrapper[33013]: I0313 10:56:57.715874 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 13 10:56:57.715983 master-0 kubenswrapper[33013]: I0313 10:56:57.715929 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 10:56:57.716168 master-0 kubenswrapper[33013]: I0313 10:56:57.715942 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 10:56:57.716168 master-0 kubenswrapper[33013]: I0313 10:56:57.716037 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 10:56:57.716168 master-0 kubenswrapper[33013]: I0313 10:56:57.715942 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 13 10:56:57.716168 master-0 kubenswrapper[33013]: I0313 10:56:57.715946 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 10:56:57.717693 master-0 kubenswrapper[33013]: I0313 10:56:57.717661 33013 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 10:56:57.717877 master-0 kubenswrapper[33013]: I0313 10:56:57.717752 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 10:56:57.717994 master-0 kubenswrapper[33013]: I0313 10:56:57.717969 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 13 10:56:57.718280 master-0 kubenswrapper[33013]: I0313 10:56:57.718252 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 10:56:57.719403 master-0 kubenswrapper[33013]: I0313 10:56:57.718915 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.724304 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.724495 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.724556 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.724692 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.724692 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.724849 33013 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.724913 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.725414 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.725635 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.730567 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.730653 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.730869 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.731447 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.731538 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.731633 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 10:56:57.732362 master-0 kubenswrapper[33013]: I0313 10:56:57.732320 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 10:56:57.734513 master-0 kubenswrapper[33013]: I0313 10:56:57.732486 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 13 10:56:57.734513 master-0 kubenswrapper[33013]: I0313 10:56:57.733319 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 13 10:56:57.735248 master-0 kubenswrapper[33013]: I0313 10:56:57.734810 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 13 10:56:57.736829 master-0 kubenswrapper[33013]: I0313 10:56:57.736551 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 10:56:57.737190 master-0 kubenswrapper[33013]: I0313 10:56:57.737071 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 10:56:57.737527 master-0 kubenswrapper[33013]: I0313 10:56:57.737490 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 10:56:57.737741 master-0 kubenswrapper[33013]: I0313 10:56:57.737625 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 10:56:57.737741 master-0 kubenswrapper[33013]: I0313 10:56:57.737648 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 13 10:56:57.739543 master-0 kubenswrapper[33013]: I0313 10:56:57.737842 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 10:56:57.739543 master-0 kubenswrapper[33013]: I0313 10:56:57.737902 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 13 10:56:57.739543 master-0 kubenswrapper[33013]: I0313 10:56:57.737914 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 13 10:56:57.739543 master-0 kubenswrapper[33013]: I0313 10:56:57.738199 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 10:56:57.739543 master-0 kubenswrapper[33013]: I0313 10:56:57.738656 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 13 10:56:57.739543 master-0 kubenswrapper[33013]: I0313 10:56:57.738932 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 10:56:57.739543 master-0 kubenswrapper[33013]: I0313 10:56:57.739048 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 13 10:56:57.739543 master-0 kubenswrapper[33013]: I0313 10:56:57.739051 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 13 10:56:57.739543 master-0 kubenswrapper[33013]: I0313 10:56:57.739341 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 13 10:56:57.740757 master-0 kubenswrapper[33013]: I0313 10:56:57.739809 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 10:56:57.740757 master-0 kubenswrapper[33013]: I0313 10:56:57.740051 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 13 10:56:57.741496 master-0 kubenswrapper[33013]: I0313 10:56:57.741434 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 10:56:57.742097 master-0 kubenswrapper[33013]: I0313 10:56:57.742041 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:56:57.742250 master-0 kubenswrapper[33013]: I0313 10:56:57.742200 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 13 10:56:57.742445 master-0 kubenswrapper[33013]: I0313 10:56:57.742378 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 10:56:57.742700 master-0 kubenswrapper[33013]: I0313 10:56:57.742571 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 10:56:57.742919 master-0 kubenswrapper[33013]: I0313 10:56:57.742875 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 10:56:57.743110 master-0 kubenswrapper[33013]: I0313 10:56:57.743065 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 13 10:56:57.743302 master-0 kubenswrapper[33013]: I0313 10:56:57.743267 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 13 10:56:57.743302 master-0 kubenswrapper[33013]: I0313 10:56:57.743284 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 13 10:56:57.743575 master-0 kubenswrapper[33013]: I0313 10:56:57.743482 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 13 10:56:57.751264 master-0 kubenswrapper[33013]: I0313 10:56:57.750880 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 10:56:57.751717 master-0 kubenswrapper[33013]: I0313 10:56:57.751568 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 10:56:57.752442 master-0 kubenswrapper[33013]: I0313 10:56:57.752281 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 10:56:57.753255 master-0 kubenswrapper[33013]: I0313 10:56:57.753030 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 13 10:56:57.754519 master-0 kubenswrapper[33013]: I0313 10:56:57.753750 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 13 10:56:57.755550 master-0 kubenswrapper[33013]: I0313 10:56:57.755203 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 13 10:56:57.760551 master-0 kubenswrapper[33013]: I0313 10:56:57.760516 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 13 10:56:57.760950 master-0 kubenswrapper[33013]: I0313 10:56:57.760827 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 13 10:56:57.761015 master-0 kubenswrapper[33013]: I0313 10:56:57.760983 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 10:56:57.790003 master-0 kubenswrapper[33013]: I0313 10:56:57.789886 33013 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 13 10:56:57.800549 master-0 kubenswrapper[33013]: I0313 10:56:57.800513 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 13 10:56:57.821247 master-0 kubenswrapper[33013]: I0313 10:56:57.821199 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 13 10:56:57.829224 master-0 kubenswrapper[33013]: I0313 10:56:57.829170 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-ovn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.829390 master-0 kubenswrapper[33013]: I0313 10:56:57.829225 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-env-overrides\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.829390 master-0 kubenswrapper[33013]: I0313 10:56:57.829257 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:56:57.829390 master-0 kubenswrapper[33013]: I0313 10:56:57.829281 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p29zg\" (UniqueName: \"kubernetes.io/projected/a1a998af-4fc0-4078-a6a0-93dde6c00508-kube-api-access-p29zg\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:56:57.829390 master-0 kubenswrapper[33013]: I0313 10:56:57.829306 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.829651 master-0 kubenswrapper[33013]: I0313 10:56:57.829615 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g"
Mar 13 10:56:57.829719 master-0 kubenswrapper[33013]: I0313 10:56:57.829696 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs"
Mar 13 10:56:57.829756 master-0 kubenswrapper[33013]: I0313 10:56:57.829728 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-config\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:56:57.829789 master-0 kubenswrapper[33013]: I0313 10:56:57.829743 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-env-overrides\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.829789 master-0 kubenswrapper[33013]: I0313 10:56:57.829754 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-tmp\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.829854 master-0 kubenswrapper[33013]: I0313 10:56:57.829814 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjk5l\" (UniqueName: \"kubernetes.io/projected/6ed47c57-533f-43e4-88eb-07da29b4878f-kube-api-access-rjk5l\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:56:57.829895 master-0 kubenswrapper[33013]: I0313 10:56:57.829849 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"
Mar 13 10:56:57.829951 master-0 kubenswrapper[33013]: I0313 10:56:57.829929 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-tmp\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.830014 master-0 kubenswrapper[33013]: I0313 10:56:57.829946 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-config\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:56:57.830056 master-0 kubenswrapper[33013]: I0313 10:56:57.830018 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn"
Mar 13 10:56:57.830094 master-0 kubenswrapper[33013]: I0313 10:56:57.830077 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-policies\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:56:57.830124 master-0 kubenswrapper[33013]: I0313 10:56:57.830107 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6xlb\" (UniqueName: \"kubernetes.io/projected/4d5479f3-51ec-4b93-8188-21cdda44828d-kube-api-access-j6xlb\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:56:57.830156 master-0 kubenswrapper[33013]: I0313 10:56:57.830128 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-default-certificate\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:56:57.830156 master-0 kubenswrapper[33013]: I0313 10:56:57.830137 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:56:57.830156 master-0 kubenswrapper[33013]: I0313 10:56:57.830147 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-etc-tuned\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.830267 master-0 kubenswrapper[33013]: I0313 10:56:57.830253 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/d9075a44-22d3-4562-819e-d5a92f013663-etc-tuned\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.830325 master-0 kubenswrapper[33013]: I0313 10:56:57.830286 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:56:57.830325 master-0 kubenswrapper[33013]: I0313 10:56:57.830313 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56qz6\" (UniqueName: \"kubernetes.io/projected/79bb87a4-8834-4c73-834e-356ccc1f7f9b-kube-api-access-56qz6\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp"
Mar 13 10:56:57.830387 master-0 kubenswrapper[33013]: I0313 10:56:57.830342 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg69z\" (UniqueName: \"kubernetes.io/projected/1c12a5d5-711f-4663-974c-c4b06e15fc39-kube-api-access-cg69z\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:56:57.830502 master-0 kubenswrapper[33013]: I0313 10:56:57.830473 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:56:57.830541 master-0 kubenswrapper[33013]: I0313 10:56:57.830502 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g"
Mar 13 10:56:57.830576 master-0 kubenswrapper[33013]: I0313 10:56:57.830539 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-proxy-tls\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq"
Mar 13 10:56:57.830739 master-0 kubenswrapper[33013]: I0313 10:56:57.830712 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grplv\" (UniqueName: \"kubernetes.io/projected/574bf255-14b3-40af-b240-2d3abd5b86b8-kube-api-access-grplv\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:56:57.830781 master-0 kubenswrapper[33013]: I0313 10:56:57.830741 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-var-lib-kubelet\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.831242 master-0 kubenswrapper[33013]: I0313 10:56:57.831212 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-cni-binary-copy\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.831296 master-0 kubenswrapper[33013]: I0313 10:56:57.831254 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-bin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.831296 master-0 kubenswrapper[33013]: I0313 10:56:57.831281 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-conf-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.831386 master-0 kubenswrapper[33013]: I0313 10:56:57.831304 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e485e709-32ba-442b-98e5-b4073516c0ab-hosts-file\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8"
Mar 13 10:56:57.831386 master-0 kubenswrapper[33013]: I0313 10:56:57.831330 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbdwm\" (UniqueName: \"kubernetes.io/projected/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-kube-api-access-qbdwm\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:56:57.831386 master-0 kubenswrapper[33013]: I0313 10:56:57.831358 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac1a605-d2d5-4004-96f5-121c20555bde-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:56:57.831489 master-0 kubenswrapper[33013]: I0313 10:56:57.831385 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cpdn\" (UniqueName: \"kubernetes.io/projected/c455a959-d764-4b4f-a1e0-95c27495dd9d-kube-api-access-2cpdn\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:56:57.831489 master-0 kubenswrapper[33013]: I0313 10:56:57.831425 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-serving-cert\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.831489 master-0 kubenswrapper[33013]: I0313 10:56:57.831442 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-cni-binary-copy\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.831489 master-0 kubenswrapper[33013]: I0313 10:56:57.831448 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-trusted-ca-bundle\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.831489 master-0 kubenswrapper[33013]: I0313 10:56:57.831481 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2znn\" (UniqueName: \"kubernetes.io/projected/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-kube-api-access-s2znn\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f"
Mar 13 10:56:57.831697 master-0 kubenswrapper[33013]: I0313 10:56:57.831503 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.831697 master-0 kubenswrapper[33013]: I0313 10:56:57.831527 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.831697 master-0 kubenswrapper[33013]: I0313 10:56:57.831553 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpdlr\" (UniqueName: \"kubernetes.io/projected/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-kube-api-access-lpdlr\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74"
Mar 13 10:56:57.831697 master-0 kubenswrapper[33013]: I0313 10:56:57.831580 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btws6\" (UniqueName: \"kubernetes.io/projected/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-kube-api-access-btws6\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9"
Mar 13 10:56:57.831697 master-0 kubenswrapper[33013]: I0313 10:56:57.831625 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-trusted-ca-bundle\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:56:57.831908 master-0 kubenswrapper[33013]: I0313 10:56:57.831699 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:56:57.831908 master-0 kubenswrapper[33013]: I0313 10:56:57.831736 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b4d53c-af72-44c8-9605-271445f95f87-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:56:57.831908 master-0 kubenswrapper[33013]: I0313 10:56:57.831800 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67"
Mar 13 10:56:57.831908 master-0 kubenswrapper[33013]: I0313 10:56:57.831826 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67"
Mar 13 10:56:57.831908 master-0 kubenswrapper[33013]: I0313 10:56:57.831845 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp847\" (UniqueName: \"kubernetes.io/projected/9da11462-a91d-4d02-8614-78b4c5b2f7e2-kube-api-access-hp847\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m"
Mar 13 10:56:57.831908 master-0 kubenswrapper[33013]: I0313 10:56:57.831875 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-config\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz"
Mar 13 10:56:57.831908 master-0 kubenswrapper[33013]: I0313 10:56:57.831895 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-bin\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.831908 master-0 kubenswrapper[33013]: I0313 10:56:57.831915 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:56:57.832165 master-0 kubenswrapper[33013]: I0313 10:56:57.831934 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh6kl\" (UniqueName: \"kubernetes.io/projected/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-kube-api-access-gh6kl\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5"
Mar 13 10:56:57.832165 master-0 kubenswrapper[33013]: I0313 10:56:57.831950 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-script-lib\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.832165 master-0 kubenswrapper[33013]: I0313 10:56:57.832071 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b4d53c-af72-44c8-9605-271445f95f87-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:56:57.832165 master-0 kubenswrapper[33013]: I0313 10:56:57.832163 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/b12e76f4-b960-4534-90e6-a2cdbecd1728-kube-api-access-xq9dl\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:56:57.832280 master-0 kubenswrapper[33013]: I0313 10:56:57.832185 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp6pp\" (UniqueName: \"kubernetes.io/projected/8a305f45-8689-45a8-8c8b-5954f2c863df-kube-api-access-zp6pp\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45"
Mar 13 10:56:57.832280 master-0 kubenswrapper[33013]: I0313 10:56:57.832237 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x"
Mar 13 10:56:57.832280 master-0 kubenswrapper[33013]: I0313 10:56:57.832257 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:56:57.832280 master-0 kubenswrapper[33013]: I0313 10:56:57.832276 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfxm5\" (UniqueName: \"kubernetes.io/projected/86774fd7-7c26-4b41-badb-de1004397637-kube-api-access-tfxm5\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm"
Mar 13 10:56:57.832423 master-0 kubenswrapper[33013]: I0313 10:56:57.832284 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-script-lib\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.832423 master-0 kubenswrapper[33013]: I0313 10:56:57.832275 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:56:57.832423 master-0 kubenswrapper[33013]: I0313 10:56:57.832380 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-hostroot\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.832423 master-0 kubenswrapper[33013]: I0313 10:56:57.832413 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1edde4bf-4554-4ab2-b588-513ad84a9bae-tmpfs\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"
Mar 13 10:56:57.832556 master-0 kubenswrapper[33013]: I0313 10:56:57.832480 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName:
\"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:56:57.832556 master-0 kubenswrapper[33013]: I0313 10:56:57.832513 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5" Mar 13 10:56:57.832556 master-0 kubenswrapper[33013]: I0313 10:56:57.832528 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1edde4bf-4554-4ab2-b588-513ad84a9bae-tmpfs\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:56:57.832556 master-0 kubenswrapper[33013]: I0313 10:56:57.832532 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxvqn\" (UniqueName: \"kubernetes.io/projected/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-kube-api-access-vxvqn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.832742 master-0 kubenswrapper[33013]: I0313 10:56:57.832642 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6ed47c57-533f-43e4-88eb-07da29b4878f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:56:57.832742 master-0 
kubenswrapper[33013]: I0313 10:56:57.832672 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-sys\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:56:57.832742 master-0 kubenswrapper[33013]: I0313 10:56:57.832643 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4" Mar 13 10:56:57.832742 master-0 kubenswrapper[33013]: I0313 10:56:57.832694 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzxzs\" (UniqueName: \"kubernetes.io/projected/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-kube-api-access-dzxzs\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:56:57.832742 master-0 kubenswrapper[33013]: I0313 10:56:57.832714 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-socket-dir-parent\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.832982 master-0 kubenswrapper[33013]: I0313 10:56:57.832759 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6ed47c57-533f-43e4-88eb-07da29b4878f-available-featuregates\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: 
\"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:56:57.832982 master-0 kubenswrapper[33013]: I0313 10:56:57.832779 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-cert\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:56:57.832982 master-0 kubenswrapper[33013]: I0313 10:56:57.832801 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:56:57.832982 master-0 kubenswrapper[33013]: I0313 10:56:57.832955 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfzq\" (UniqueName: \"kubernetes.io/projected/c87545aa-11c2-4e6e-8c13-16eeff3be83b-kube-api-access-pwfzq\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:56:57.833130 master-0 kubenswrapper[33013]: I0313 10:56:57.832995 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-webhook-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:56:57.833130 master-0 kubenswrapper[33013]: I0313 10:56:57.833020 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:56:57.833130 master-0 kubenswrapper[33013]: I0313 10:56:57.833038 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-netns\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.833221 master-0 kubenswrapper[33013]: I0313 10:56:57.833158 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:56:57.833221 master-0 kubenswrapper[33013]: I0313 10:56:57.833184 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5448b59a-b731-45a3-9ded-d25315f597fb-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:56:57.833374 master-0 kubenswrapper[33013]: I0313 10:56:57.833347 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7667717b-fb74-456b-8615-16475cb69e98-trusted-ca\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:56:57.833412 
master-0 kubenswrapper[33013]: I0313 10:56:57.833377 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rht\" (UniqueName: \"kubernetes.io/projected/b8d40b37-0f3d-4531-9fa8-eda965d2337d-kube-api-access-l5rht\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:56:57.833412 master-0 kubenswrapper[33013]: I0313 10:56:57.833395 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-utilities\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:56:57.833474 master-0 kubenswrapper[33013]: I0313 10:56:57.833416 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j25nl\" (UniqueName: \"kubernetes.io/projected/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-kube-api-access-j25nl\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:56:57.833474 master-0 kubenswrapper[33013]: I0313 10:56:57.833434 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-textfile\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:56:57.833835 master-0 kubenswrapper[33013]: I0313 10:56:57.833687 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48nns\" (UniqueName: 
\"kubernetes.io/projected/5b796628-a6ca-4d5c-9870-0ca60b9372aa-kube-api-access-48nns\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:56:57.833944 master-0 kubenswrapper[33013]: I0313 10:56:57.833704 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:56:57.834001 master-0 kubenswrapper[33013]: I0313 10:56:57.833919 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-catalog-content\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:56:57.834078 master-0 kubenswrapper[33013]: I0313 10:56:57.833803 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-utilities\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:56:57.834113 master-0 kubenswrapper[33013]: I0313 10:56:57.834054 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:56:57.834181 master-0 kubenswrapper[33013]: I0313 10:56:57.834066 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-catalog-content\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:56:57.834218 master-0 kubenswrapper[33013]: I0313 10:56:57.833727 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-textfile\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:56:57.834248 master-0 kubenswrapper[33013]: I0313 10:56:57.834203 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:56:57.834357 master-0 kubenswrapper[33013]: I0313 10:56:57.834322 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b12e76f4-b960-4534-90e6-a2cdbecd1728-host-slash\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd" Mar 13 10:56:57.834427 master-0 kubenswrapper[33013]: I0313 10:56:57.834376 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " 
pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:56:57.834427 master-0 kubenswrapper[33013]: I0313 10:56:57.834389 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7667717b-fb74-456b-8615-16475cb69e98-trusted-ca\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:56:57.834513 master-0 kubenswrapper[33013]: I0313 10:56:57.834430 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwqp6\" (UniqueName: \"kubernetes.io/projected/549bd192-0235-4994-b485-f1b10d16f6b5-kube-api-access-pwqp6\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" Mar 13 10:56:57.834513 master-0 kubenswrapper[33013]: I0313 10:56:57.834496 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:56:57.834575 master-0 kubenswrapper[33013]: I0313 10:56:57.834512 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfpp\" (UniqueName: \"kubernetes.io/projected/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-kube-api-access-8rfpp\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" Mar 13 10:56:57.834645 master-0 kubenswrapper[33013]: I0313 10:56:57.834613 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: 
\"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:56:57.834866 master-0 kubenswrapper[33013]: I0313 10:56:57.834621 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3168fc-6c8f-4603-94e0-17b1ae22a802-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" Mar 13 10:56:57.834916 master-0 kubenswrapper[33013]: I0313 10:56:57.834884 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk4sg\" (UniqueName: \"kubernetes.io/projected/f87662b9-6ac6-44f3-8a16-ff858c2baa91-kube-api-access-zk4sg\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:56:57.834982 master-0 kubenswrapper[33013]: I0313 10:56:57.834949 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgb25\" (UniqueName: \"kubernetes.io/projected/11927952-723f-4d6d-922b-73139abe8877-kube-api-access-kgb25\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:56:57.835024 master-0 kubenswrapper[33013]: I0313 10:56:57.834989 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec3168fc-6c8f-4603-94e0-17b1ae22a802-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl" Mar 13 10:56:57.835024 master-0 kubenswrapper[33013]: I0313 10:56:57.835004 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.835087 master-0 kubenswrapper[33013]: I0313 10:56:57.835050 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:56:57.835154 master-0 kubenswrapper[33013]: I0313 10:56:57.835116 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/549bd192-0235-4994-b485-f1b10d16f6b5-signing-key\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" Mar 13 10:56:57.835227 master-0 kubenswrapper[33013]: I0313 10:56:57.835198 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-serving-cert\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:56:57.835290 master-0 kubenswrapper[33013]: I0313 10:56:57.835264 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26dtr\" 
(UniqueName: \"kubernetes.io/projected/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-kube-api-access-26dtr\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:56:57.835334 master-0 kubenswrapper[33013]: I0313 10:56:57.835312 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r657p\" (UniqueName: \"kubernetes.io/projected/2195f7be-b41e-4ae2-b737-d5782e0d41a8-kube-api-access-r657p\") pod \"network-check-source-7c67b67d47-jbx9v\" (UID: \"2195f7be-b41e-4ae2-b737-d5782e0d41a8\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" Mar 13 10:56:57.835389 master-0 kubenswrapper[33013]: I0313 10:56:57.835365 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrlx\" (UniqueName: \"kubernetes.io/projected/866cf034-8fd8-4f16-8e9b-68627228aa8d-kube-api-access-mnrlx\") pod \"csi-snapshot-controller-operator-5685fbc7d-mfvmx\" (UID: \"866cf034-8fd8-4f16-8e9b-68627228aa8d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" Mar 13 10:56:57.835464 master-0 kubenswrapper[33013]: I0313 10:56:57.835440 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/549bd192-0235-4994-b485-f1b10d16f6b5-signing-key\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" Mar 13 10:56:57.835464 master-0 kubenswrapper[33013]: I0313 10:56:57.835441 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-catalog-content\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " 
pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:56:57.835565 master-0 kubenswrapper[33013]: I0313 10:56:57.835489 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdb2x\" (UniqueName: \"kubernetes.io/projected/2a05e72d-836f-40e0-8a5c-ee02dce494b3-kube-api-access-qdb2x\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:56:57.835565 master-0 kubenswrapper[33013]: I0313 10:56:57.835508 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:56:57.835565 master-0 kubenswrapper[33013]: I0313 10:56:57.835527 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-config\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:56:57.835565 master-0 kubenswrapper[33013]: I0313 10:56:57.835557 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-serving-cert\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:56:57.835865 master-0 kubenswrapper[33013]: I0313 10:56:57.835653 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-catalog-content\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc" Mar 13 10:56:57.835865 master-0 kubenswrapper[33013]: I0313 10:56:57.835724 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9db15a-8854-485b-9863-9cbe5dddd977-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:56:57.835865 master-0 kubenswrapper[33013]: I0313 10:56:57.835764 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-config\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:56:57.835865 master-0 kubenswrapper[33013]: I0313 10:56:57.835765 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-node-pullsecrets\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:56:57.835865 master-0 kubenswrapper[33013]: I0313 10:56:57.835809 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-client\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:56:57.835865 master-0 kubenswrapper[33013]: I0313 10:56:57.835832 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:56:57.835865 master-0 kubenswrapper[33013]: I0313 10:56:57.835855 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5hq9\" (UniqueName: \"kubernetes.io/projected/b68ed803-45e2-42f1-99b1-33cf59b01d74-kube-api-access-q5hq9\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:56:57.836069 master-0 kubenswrapper[33013]: I0313 10:56:57.835874 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:57.836069 master-0 kubenswrapper[33013]: I0313 10:56:57.835925 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ed47c57-533f-43e4-88eb-07da29b4878f-serving-cert\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:56:57.836069 master-0 kubenswrapper[33013]: I0313 10:56:57.835965 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-machine-approver-tls\") pod 
\"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:56:57.836069 master-0 kubenswrapper[33013]: I0313 10:56:57.835997 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:56:57.836069 master-0 kubenswrapper[33013]: I0313 10:56:57.836027 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9db15a-8854-485b-9863-9cbe5dddd977-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:56:57.836241 master-0 kubenswrapper[33013]: I0313 10:56:57.836076 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:56:57.836241 master-0 kubenswrapper[33013]: I0313 10:56:57.836134 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-system-cni-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " 
pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:56:57.836241 master-0 kubenswrapper[33013]: I0313 10:56:57.836163 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:56:57.836241 master-0 kubenswrapper[33013]: I0313 10:56:57.836175 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a305f45-8689-45a8-8c8b-5954f2c863df-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:56:57.836241 master-0 kubenswrapper[33013]: I0313 10:56:57.836189 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ed47c57-533f-43e4-88eb-07da29b4878f-serving-cert\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:56:57.836241 master-0 kubenswrapper[33013]: I0313 10:56:57.836215 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84xk\" (UniqueName: \"kubernetes.io/projected/2afe3890-e844-4dd3-ba49-3ac9178549bf-kube-api-access-d84xk\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:56:57.836402 master-0 kubenswrapper[33013]: I0313 10:56:57.836266 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-root\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:56:57.836402 master-0 kubenswrapper[33013]: I0313 10:56:57.836323 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt6sd\" (UniqueName: \"kubernetes.io/projected/5aa507cf-017d-44f5-8662-77547f82fb51-kube-api-access-jt6sd\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:56:57.836402 master-0 kubenswrapper[33013]: I0313 10:56:57.836345 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-system-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.836402 master-0 kubenswrapper[33013]: I0313 10:56:57.836363 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:56:57.836506 master-0 kubenswrapper[33013]: I0313 10:56:57.836404 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-catalog-content\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:56:57.836536 master-0 
kubenswrapper[33013]: I0313 10:56:57.836511 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-systemd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.836628 master-0 kubenswrapper[33013]: I0313 10:56:57.836569 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k8rp\" (UniqueName: \"kubernetes.io/projected/d288e5d0-0976-477f-be14-b3d5828e0482-kube-api-access-5k8rp\") pod \"migrator-57ccdf9b5-fgvbv\" (UID: \"d288e5d0-0976-477f-be14-b3d5828e0482\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv" Mar 13 10:56:57.836678 master-0 kubenswrapper[33013]: I0313 10:56:57.836622 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-catalog-content\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:56:57.836678 master-0 kubenswrapper[33013]: I0313 10:56:57.836653 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:56:57.836743 master-0 kubenswrapper[33013]: I0313 10:56:57.836693 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b68ed803-45e2-42f1-99b1-33cf59b01d74-audit-log\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: 
\"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:56:57.836772 master-0 kubenswrapper[33013]: I0313 10:56:57.836747 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b68ed803-45e2-42f1-99b1-33cf59b01d74-audit-log\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:56:57.836772 master-0 kubenswrapper[33013]: I0313 10:56:57.836750 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-serving-ca\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:56:57.836832 master-0 kubenswrapper[33013]: I0313 10:56:57.836787 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/549bd192-0235-4994-b485-f1b10d16f6b5-signing-cabundle\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" Mar 13 10:56:57.836832 master-0 kubenswrapper[33013]: I0313 10:56:57.836805 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:56:57.836886 master-0 kubenswrapper[33013]: I0313 10:56:57.836857 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvtr\" (UniqueName: 
\"kubernetes.io/projected/9aa4b44d-f202-4670-afab-44b38960026f-kube-api-access-bjvtr\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.836916 master-0 kubenswrapper[33013]: I0313 10:56:57.836886 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22bwx\" (UniqueName: \"kubernetes.io/projected/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-kube-api-access-22bwx\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:56:57.836946 master-0 kubenswrapper[33013]: I0313 10:56:57.836902 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-serving-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:56:57.836946 master-0 kubenswrapper[33013]: I0313 10:56:57.836938 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:56:57.837004 master-0 kubenswrapper[33013]: I0313 10:56:57.836956 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqjkf\" (UniqueName: \"kubernetes.io/projected/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-kube-api-access-qqjkf\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:56:57.837004 
master-0 kubenswrapper[33013]: I0313 10:56:57.836974 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:56:57.837004 master-0 kubenswrapper[33013]: I0313 10:56:57.836995 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:56:57.837111 master-0 kubenswrapper[33013]: I0313 10:56:57.837087 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:56:57.837151 master-0 kubenswrapper[33013]: I0313 10:56:57.837091 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:56:57.837151 master-0 kubenswrapper[33013]: I0313 10:56:57.837139 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-etc-kubernetes\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.837213 master-0 kubenswrapper[33013]: I0313 10:56:57.837162 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:56:57.837213 master-0 kubenswrapper[33013]: I0313 10:56:57.837184 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit-dir\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:56:57.837273 master-0 kubenswrapper[33013]: I0313 10:56:57.837210 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8q5s\" (UniqueName: \"kubernetes.io/projected/beee81ef-5a3a-4df2-85d5-2573679d261f-kube-api-access-f8q5s\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:56:57.837273 master-0 kubenswrapper[33013]: I0313 10:56:57.837235 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:56:57.837273 master-0 
kubenswrapper[33013]: I0313 10:56:57.837255 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5656\" (UniqueName: \"kubernetes.io/projected/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-kube-api-access-f5656\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:56:57.837354 master-0 kubenswrapper[33013]: I0313 10:56:57.837274 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kvd\" (UniqueName: \"kubernetes.io/projected/5448b59a-b731-45a3-9ded-d25315f597fb-kube-api-access-d8kvd\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:56:57.837354 master-0 kubenswrapper[33013]: I0313 10:56:57.837297 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:56:57.837354 master-0 kubenswrapper[33013]: I0313 10:56:57.837300 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc" Mar 13 10:56:57.837354 master-0 kubenswrapper[33013]: I0313 10:56:57.837322 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:56:57.837466 master-0 kubenswrapper[33013]: I0313 10:56:57.837406 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt7hs\" (UniqueName: \"kubernetes.io/projected/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-kube-api-access-bt7hs\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:56:57.837466 master-0 kubenswrapper[33013]: I0313 10:56:57.837445 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd2mn\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-kube-api-access-qd2mn\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" Mar 13 10:56:57.837466 master-0 kubenswrapper[33013]: I0313 10:56:57.837449 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:56:57.837606 master-0 kubenswrapper[33013]: I0313 10:56:57.837557 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-images\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" 
Mar 13 10:56:57.837652 master-0 kubenswrapper[33013]: I0313 10:56:57.837613 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:56:57.837652 master-0 kubenswrapper[33013]: I0313 10:56:57.837637 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/3ff2ab1c-7057-4e18-8e32-68807f86532a-kube-api-access-8c4rc\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:56:57.837713 master-0 kubenswrapper[33013]: I0313 10:56:57.837656 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-metrics-certs\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:56:57.837769 master-0 kubenswrapper[33013]: I0313 10:56:57.837752 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-utilities\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:56:57.837802 master-0 kubenswrapper[33013]: I0313 10:56:57.837788 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-auth-proxy-config\") pod 
\"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:56:57.837832 master-0 kubenswrapper[33013]: I0313 10:56:57.837815 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysconfig\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.837862 master-0 kubenswrapper[33013]: I0313 10:56:57.837846 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" Mar 13 10:56:57.837892 master-0 kubenswrapper[33013]: I0313 10:56:57.837861 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa507cf-017d-44f5-8662-77547f82fb51-utilities\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:56:57.837979 master-0 kubenswrapper[33013]: I0313 10:56:57.837960 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-config\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.838013 master-0 kubenswrapper[33013]: I0313 10:56:57.837986 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11927952-723f-4d6d-922b-73139abe8877-metrics-tls\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:56:57.838013 master-0 kubenswrapper[33013]: I0313 10:56:57.838005 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48f99840-4d9e-49c5-819e-0bb15493feb5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:56:57.838075 master-0 kubenswrapper[33013]: I0313 10:56:57.838022 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9db15a-8854-485b-9863-9cbe5dddd977-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh" Mar 13 10:56:57.838075 master-0 kubenswrapper[33013]: I0313 10:56:57.838046 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:56:57.838075 master-0 kubenswrapper[33013]: I0313 10:56:57.838068 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-slash\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.838185 master-0 kubenswrapper[33013]: I0313 10:56:57.838148 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:56:57.838242 master-0 kubenswrapper[33013]: I0313 10:56:57.838163 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-config\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:56:57.838275 master-0 kubenswrapper[33013]: I0313 10:56:57.838233 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovnkube-config\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.838305 master-0 kubenswrapper[33013]: I0313 10:56:57.838276 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-ovnkube-identity-cm\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:56:57.838342 master-0 kubenswrapper[33013]: I0313 10:56:57.838304 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-config\") pod 
\"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw" Mar 13 10:56:57.838342 master-0 kubenswrapper[33013]: I0313 10:56:57.838317 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11927952-723f-4d6d-922b-73139abe8877-config-volume\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:56:57.838402 master-0 kubenswrapper[33013]: I0313 10:56:57.838322 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ff2ab1c-7057-4e18-8e32-68807f86532a-metrics-tls\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm" Mar 13 10:56:57.838463 master-0 kubenswrapper[33013]: I0313 10:56:57.838444 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:56:57.838500 master-0 kubenswrapper[33013]: I0313 10:56:57.838483 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-systemd-units\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.838528 master-0 kubenswrapper[33013]: I0313 10:56:57.838506 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-dir\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:56:57.838661 master-0 kubenswrapper[33013]: I0313 10:56:57.838619 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:56:57.838815 master-0 kubenswrapper[33013]: I0313 10:56:57.838633 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-ovnkube-identity-cm\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:56:57.838853 master-0 kubenswrapper[33013]: I0313 10:56:57.838813 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:56:57.838916 master-0 kubenswrapper[33013]: I0313 10:56:57.838839 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79bb87a4-8834-4c73-834e-356ccc1f7f9b-metrics-certs\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 10:56:57.838916 master-0 kubenswrapper[33013]: I0313 10:56:57.838860 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:56:57.838916 master-0 kubenswrapper[33013]: I0313 10:56:57.838909 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdg6f\" (UniqueName: \"kubernetes.io/projected/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-api-access-qdg6f\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn"
Mar 13 10:56:57.839027 master-0 kubenswrapper[33013]: I0313 10:56:57.838957 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9db15a-8854-485b-9863-9cbe5dddd977-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:56:57.839027 master-0 kubenswrapper[33013]: I0313 10:56:57.838996 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87545aa-11c2-4e6e-8c13-16eeff3be83b-serving-cert\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9"
Mar 13 10:56:57.839133 master-0 kubenswrapper[33013]: I0313 10:56:57.839034 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvhh\" (UniqueName: \"kubernetes.io/projected/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-kube-api-access-rvvhh\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:56:57.839133 master-0 kubenswrapper[33013]: I0313 10:56:57.839079 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:56:57.839133 master-0 kubenswrapper[33013]: I0313 10:56:57.839089 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c12a5d5-711f-4663-974c-c4b06e15fc39-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn"
Mar 13 10:56:57.839133 master-0 kubenswrapper[33013]: I0313 10:56:57.839110 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b4d53c-af72-44c8-9605-271445f95f87-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:56:57.839290 master-0 kubenswrapper[33013]: I0313 10:56:57.839128 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovn-node-metrics-cert\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.839290 master-0 kubenswrapper[33013]: I0313 10:56:57.839177 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqg6g\" (UniqueName: \"kubernetes.io/projected/6622be09-206e-4d02-90ca-6d9f2fc852aa-kube-api-access-lqg6g\") pod \"csi-snapshot-controller-7577d6f48-cbhxt\" (UID: \"6622be09-206e-4d02-90ca-6d9f2fc852aa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"
Mar 13 10:56:57.839290 master-0 kubenswrapper[33013]: I0313 10:56:57.839213 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-apiservice-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"
Mar 13 10:56:57.839290 master-0 kubenswrapper[33013]: I0313 10:56:57.839247 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f"
Mar 13 10:56:57.839290 master-0 kubenswrapper[33013]: I0313 10:56:57.839276 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:56:57.839488 master-0 kubenswrapper[33013]: I0313 10:56:57.839314 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-certs\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z"
Mar 13 10:56:57.839488 master-0 kubenswrapper[33013]: I0313 10:56:57.839337 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-kubelet\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.839488 master-0 kubenswrapper[33013]: I0313 10:56:57.839359 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxkl8\" (UniqueName: \"kubernetes.io/projected/1edde4bf-4554-4ab2-b588-513ad84a9bae-kube-api-access-kxkl8\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"
Mar 13 10:56:57.839488 master-0 kubenswrapper[33013]: I0313 10:56:57.839368 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-ovn-node-metrics-cert\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.839488 master-0 kubenswrapper[33013]: I0313 10:56:57.839379 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:56:57.839488 master-0 kubenswrapper[33013]: I0313 10:56:57.839425 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs"
Mar 13 10:56:57.839488 master-0 kubenswrapper[33013]: I0313 10:56:57.839480 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3168fc-6c8f-4603-94e0-17b1ae22a802-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:56:57.839786 master-0 kubenswrapper[33013]: I0313 10:56:57.839533 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-k8s-cni-cncf-io\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.839786 master-0 kubenswrapper[33013]: I0313 10:56:57.839607 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c87545aa-11c2-4e6e-8c13-16eeff3be83b-snapshots\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9"
Mar 13 10:56:57.839786 master-0 kubenswrapper[33013]: I0313 10:56:57.839646 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-cnibin\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.839786 master-0 kubenswrapper[33013]: I0313 10:56:57.839645 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec3168fc-6c8f-4603-94e0-17b1ae22a802-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:56:57.839786 master-0 kubenswrapper[33013]: I0313 10:56:57.839702 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-binary-copy\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.839786 master-0 kubenswrapper[33013]: I0313 10:56:57.839751 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9db15a-8854-485b-9863-9cbe5dddd977-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:56:57.839786 master-0 kubenswrapper[33013]: I0313 10:56:57.839773 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c87545aa-11c2-4e6e-8c13-16eeff3be83b-snapshots\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9"
Mar 13 10:56:57.840065 master-0 kubenswrapper[33013]: I0313 10:56:57.839752 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-client\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:56:57.840065 master-0 kubenswrapper[33013]: I0313 10:56:57.839918 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-modprobe-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.840065 master-0 kubenswrapper[33013]: I0313 10:56:57.839937 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-conf\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.840065 master-0 kubenswrapper[33013]: I0313 10:56:57.839942 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-client\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:56:57.840065 master-0 kubenswrapper[33013]: I0313 10:56:57.839964 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8w5\" (UniqueName: \"kubernetes.io/projected/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-kube-api-access-gn8w5\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:56:57.840065 master-0 kubenswrapper[33013]: I0313 10:56:57.839965 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-binary-copy\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.840065 master-0 kubenswrapper[33013]: I0313 10:56:57.840022 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-encryption-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.840332 master-0 kubenswrapper[33013]: I0313 10:56:57.840060 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 13 10:56:57.840332 master-0 kubenswrapper[33013]: I0313 10:56:57.840128 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk"
Mar 13 10:56:57.840332 master-0 kubenswrapper[33013]: I0313 10:56:57.840178 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf"
Mar 13 10:56:57.840332 master-0 kubenswrapper[33013]: I0313 10:56:57.840222 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcb99\" (UniqueName: \"kubernetes.io/projected/c09f42db-e6d7-469d-9761-88a879f6aa6b-kube-api-access-mcb99\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"
Mar 13 10:56:57.840488 master-0 kubenswrapper[33013]: I0313 10:56:57.840380 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f87662b9-6ac6-44f3-8a16-ff858c2baa91-webhook-cert\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk"
Mar 13 10:56:57.841376 master-0 kubenswrapper[33013]: I0313 10:56:57.841341 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp"
Mar 13 10:56:57.841453 master-0 kubenswrapper[33013]: I0313 10:56:57.841423 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-image-import-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.841507 master-0 kubenswrapper[33013]: I0313 10:56:57.841459 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf"
Mar 13 10:56:57.841507 master-0 kubenswrapper[33013]: I0313 10:56:57.841493 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.841610 master-0 kubenswrapper[33013]: I0313 10:56:57.841521 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec3168fc-6c8f-4603-94e0-17b1ae22a802-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:56:57.841610 master-0 kubenswrapper[33013]: I0313 10:56:57.841551 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:56:57.841610 master-0 kubenswrapper[33013]: I0313 10:56:57.841577 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-os-release\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.841726 master-0 kubenswrapper[33013]: I0313 10:56:57.841621 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-var-lib-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.841726 master-0 kubenswrapper[33013]: I0313 10:56:57.841653 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:56:57.841726 master-0 kubenswrapper[33013]: I0313 10:56:57.841680 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:56:57.841726 master-0 kubenswrapper[33013]: I0313 10:56:57.841711 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/b8d40b37-0f3d-4531-9fa8-eda965d2337d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:56:57.841841 master-0 kubenswrapper[33013]: I0313 10:56:57.841803 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/b8d40b37-0f3d-4531-9fa8-eda965d2337d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2"
Mar 13 10:56:57.841908 master-0 kubenswrapper[33013]: I0313 10:56:57.841889 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:56:57.841942 master-0 kubenswrapper[33013]: I0313 10:56:57.841923 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knkb7\" (UniqueName: \"kubernetes.io/projected/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-kube-api-access-knkb7\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:56:57.841973 master-0 kubenswrapper[33013]: I0313 10:56:57.841953 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac1a605-d2d5-4004-96f5-121c20555bde-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:56:57.842083 master-0 kubenswrapper[33013]: I0313 10:56:57.842066 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzv5v\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-kube-api-access-hzv5v\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:56:57.842115 master-0 kubenswrapper[33013]: I0313 10:56:57.842098 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-stats-auth\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:56:57.842148 master-0 kubenswrapper[33013]: I0313 10:56:57.842128 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw"
Mar 13 10:56:57.842176 master-0 kubenswrapper[33013]: I0313 10:56:57.842092 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:56:57.842176 master-0 kubenswrapper[33013]: I0313 10:56:57.842157 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/9da11462-a91d-4d02-8614-78b4c5b2f7e2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m"
Mar 13 10:56:57.842231 master-0 kubenswrapper[33013]: I0313 10:56:57.842192 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:56:57.842231 master-0 kubenswrapper[33013]: I0313 10:56:57.842212 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-node-bootstrap-token\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z"
Mar 13 10:56:57.842306 master-0 kubenswrapper[33013]: I0313 10:56:57.842232 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp2qn\" (UniqueName: \"kubernetes.io/projected/37b2e803-302b-4650-b18f-d3d2dd703bd5-kube-api-access-hp2qn\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:56:57.842306 master-0 kubenswrapper[33013]: I0313 10:56:57.842253 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-wtmp\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw"
Mar 13 10:56:57.842306 master-0 kubenswrapper[33013]: I0313 10:56:57.842271 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-cnibin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.842388 master-0 kubenswrapper[33013]: I0313 10:56:57.842308 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-os-release\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.842388 master-0 kubenswrapper[33013]: I0313 10:56:57.842316 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c455a959-d764-4b4f-a1e0-95c27495dd9d-srv-cert\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:56:57.842388 master-0 kubenswrapper[33013]: I0313 10:56:57.842345 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c"
Mar 13 10:56:57.842600 master-0 kubenswrapper[33013]: I0313 10:56:57.842387 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"
Mar 13 10:56:57.842600 master-0 kubenswrapper[33013]: I0313 10:56:57.842440 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:56:57.842600 master-0 kubenswrapper[33013]: I0313 10:56:57.842481 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67"
Mar 13 10:56:57.842600 master-0 kubenswrapper[33013]: I0313 10:56:57.842553 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9"
Mar 13 10:56:57.842727 master-0 kubenswrapper[33013]: I0313 10:56:57.842611 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7667717b-fb74-456b-8615-16475cb69e98-metrics-tls\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:56:57.842727 master-0 kubenswrapper[33013]: I0313 10:56:57.842626 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9cbm\" (UniqueName: \"kubernetes.io/projected/1d72d950-cfb4-4ed5-9ad6-f7266b937493-kube-api-access-h9cbm\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.842727 master-0 kubenswrapper[33013]: I0313 10:56:57.842671 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a998af-4fc0-4078-a6a0-93dde6c00508-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:56:57.842817 master-0 kubenswrapper[33013]: I0313 10:56:57.842771 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:56:57.842817 master-0 kubenswrapper[33013]: I0313 10:56:57.842775 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:56:57.842870 master-0 kubenswrapper[33013]: I0313 10:56:57.842822 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b12e76f4-b960-4534-90e6-a2cdbecd1728-iptables-alerter-script\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:56:57.842870 master-0 kubenswrapper[33013]: I0313 10:56:57.842855 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-netd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.842926 master-0 kubenswrapper[33013]: I0313 10:56:57.842886 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-multus-daemon-config\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.842926 master-0 kubenswrapper[33013]: I0313 10:56:57.842915 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x"
Mar 13 10:56:57.842984 master-0 kubenswrapper[33013]: I0313 10:56:57.842943 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vcv\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-kube-api-access-m5vcv\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:56:57.842984 master-0 kubenswrapper[33013]: I0313 10:56:57.842970 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkdfn\" (UniqueName: \"kubernetes.io/projected/eb778c86-ea51-4eab-82b8-a8e0bec0f050-kube-api-access-hkdfn\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:56:57.843069 master-0 kubenswrapper[33013]: I0313 10:56:57.843038 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a998af-4fc0-4078-a6a0-93dde6c00508-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:56:57.843104 master-0 kubenswrapper[33013]: I0313 10:56:57.843056 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b12e76f4-b960-4534-90e6-a2cdbecd1728-iptables-alerter-script\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:56:57.843185 master-0 kubenswrapper[33013]: I0313 10:56:57.843166 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-rootfs\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq"
Mar 13 10:56:57.843217 master-0 kubenswrapper[33013]: I0313 10:56:57.843205 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf"
Mar 13 10:56:57.843248 master-0 kubenswrapper[33013]: I0313 10:56:57.843215 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9aa4b44d-f202-4670-afab-44b38960026f-multus-daemon-config\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.843248 master-0 kubenswrapper[33013]: I0313 10:56:57.843230 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-config\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"
Mar 13 10:56:57.843394 master-0 kubenswrapper[33013]: I0313 10:56:57.843372 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcjm\" (UniqueName: \"kubernetes.io/projected/42b4d53c-af72-44c8-9605-271445f95f87-kube-api-access-kjcjm\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:56:57.843427 master-0 kubenswrapper[33013]: I0313 10:56:57.843406 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-config\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:56:57.843460 master-0 kubenswrapper[33013]: I0313 10:56:57.843430 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67"
Mar 13 10:56:57.843493 master-0 kubenswrapper[33013]: I0313 10:56:57.843457 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-multus-certs\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.843493 master-0 kubenswrapper[33013]: I0313 10:56:57.843481 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/257a4a8b-014c-4473-80a0-e95cf6d41bf1-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:56:57.843553 master-0 kubenswrapper[33013]: I0313 10:56:57.843501 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x"
Mar 13 10:56:57.843553 master-0 kubenswrapper[33013]: I0313 10:56:57.843523 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4d5479f3-51ec-4b93-8188-21cdda44828d-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz"
Mar 13 10:56:57.843553 master-0 kubenswrapper[33013]: I0313 10:56:57.843541 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID:
\"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:56:57.843746 master-0 kubenswrapper[33013]: I0313 10:56:57.843565 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:56:57.843746 master-0 kubenswrapper[33013]: I0313 10:56:57.843602 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9zz\" (UniqueName: \"kubernetes.io/projected/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-kube-api-access-xg9zz\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:56:57.843746 master-0 kubenswrapper[33013]: I0313 10:56:57.843623 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwc4l\" (UniqueName: \"kubernetes.io/projected/e485e709-32ba-442b-98e5-b4073516c0ab-kube-api-access-qwc4l\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8" Mar 13 10:56:57.843746 master-0 kubenswrapper[33013]: I0313 10:56:57.843635 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-config\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:56:57.843746 master-0 kubenswrapper[33013]: I0313 10:56:57.843642 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-htqw9\" (UniqueName: \"kubernetes.io/projected/d9075a44-22d3-4562-819e-d5a92f013663-kube-api-access-htqw9\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.843746 master-0 kubenswrapper[33013]: I0313 10:56:57.843670 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-catalog-content\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:56:57.843746 master-0 kubenswrapper[33013]: I0313 10:56:57.843696 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:56:57.843746 master-0 kubenswrapper[33013]: I0313 10:56:57.843718 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:56:57.843746 master-0 kubenswrapper[33013]: I0313 10:56:57.843738 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6nnz\" (UniqueName: \"kubernetes.io/projected/5843b0d4-a538-4261-b425-598e318c9d07-kube-api-access-r6nnz\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:56:57.843998 master-0 
kubenswrapper[33013]: I0313 10:56:57.843764 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-node-log\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.843998 master-0 kubenswrapper[33013]: I0313 10:56:57.843790 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b10584c2-ef04-4649-bcb6-9222c9530c3f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:56:57.843998 master-0 kubenswrapper[33013]: I0313 10:56:57.843813 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsswm\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-kube-api-access-zsswm\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:56:57.843998 master-0 kubenswrapper[33013]: I0313 10:56:57.843835 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:56:57.843998 master-0 kubenswrapper[33013]: I0313 10:56:57.843855 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-vkqtt\" (UID: \"a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt" Mar 13 10:56:57.843998 master-0 kubenswrapper[33013]: I0313 10:56:57.843876 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:56:57.843998 master-0 kubenswrapper[33013]: I0313 10:56:57.843875 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-catalog-content\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:56:57.843998 master-0 kubenswrapper[33013]: I0313 10:56:57.843960 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/257a4a8b-014c-4473-80a0-e95cf6d41bf1-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844040 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5843b0d4-a538-4261-b425-598e318c9d07-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc" Mar 13 10:56:57.844252 master-0 
kubenswrapper[33013]: I0313 10:56:57.844073 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-utilities\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844120 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844139 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-mcd-auth-proxy-config\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844157 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844178 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: 
\"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844190 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4d5479f3-51ec-4b93-8188-21cdda44828d-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844199 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844220 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-env-overrides\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844237 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-host-etc-kube\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" Mar 13 10:56:57.844252 master-0 kubenswrapper[33013]: I0313 10:56:57.844255 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-host\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844275 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k4c5\" (UniqueName: \"kubernetes.io/projected/4df756f0-c6b6-4730-842a-7ee9227397ae-kube-api-access-8k4c5\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844293 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-utilities\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844310 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-encryption-config\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844329 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:56:57.844550 master-0 
kubenswrapper[33013]: I0313 10:56:57.844346 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844360 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d5479f3-51ec-4b93-8188-21cdda44828d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844368 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844387 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-tls\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844409 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844431 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-systemd\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844452 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844471 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-metrics-client-ca\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844489 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2q2f\" (UniqueName: \"kubernetes.io/projected/939a3da3-62e7-4376-853d-dc333465446c-kube-api-access-t2q2f\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 
10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844508 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-lib-modules\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844539 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-metrics-tls\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844540 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b10584c2-ef04-4649-bcb6-9222c9530c3f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf" Mar 13 10:56:57.844550 master-0 kubenswrapper[33013]: I0313 10:56:57.844557 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844574 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-sys\") pod 
\"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844622 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdgld\" (UniqueName: \"kubernetes.io/projected/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-kube-api-access-tdgld\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844642 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-netns\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844660 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-kubelet\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844680 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844697 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/eb778c86-ea51-4eab-82b8-a8e0bec0f050-service-ca-bundle\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844718 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-etc-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844776 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2afe3890-e844-4dd3-ba49-3ac9178549bf-srv-cert\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844907 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b796628-a6ca-4d5c-9870-0ca60b9372aa-metrics-client-ca\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.844997 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-images\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.845009 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-config\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w" Mar 13 10:56:57.845042 master-0 kubenswrapper[33013]: I0313 10:56:57.845031 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d40b37-0f3d-4531-9fa8-eda965d2337d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:56:57.845361 master-0 kubenswrapper[33013]: I0313 10:56:57.845065 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-client\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:56:57.845361 master-0 kubenswrapper[33013]: I0313 10:56:57.845166 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f87662b9-6ac6-44f3-8a16-ff858c2baa91-env-overrides\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:56:57.845361 master-0 kubenswrapper[33013]: I0313 10:56:57.845197 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-images\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:56:57.845361 master-0 kubenswrapper[33013]: I0313 10:56:57.845167 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/574bf255-14b3-40af-b240-2d3abd5b86b8-etcd-ca\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:56:57.845361 master-0 kubenswrapper[33013]: I0313 10:56:57.845286 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a05e72d-836f-40e0-8a5c-ee02dce494b3-utilities\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:56:57.845361 master-0 kubenswrapper[33013]: I0313 10:56:57.845286 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8d40b37-0f3d-4531-9fa8-eda965d2337d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:56:57.845516 master-0 kubenswrapper[33013]: I0313 10:56:57.845418 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beee81ef-5a3a-4df2-85d5-2573679d261f-utilities\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:56:57.845516 master-0 kubenswrapper[33013]: I0313 10:56:57.845453 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlmhn\" (UniqueName: \"kubernetes.io/projected/070b85a0-f076-4750-aa00-dabba401dc75-kube-api-access-nlmhn\") 
pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"
Mar 13 10:56:57.845620 master-0 kubenswrapper[33013]: I0313 10:56:57.845600 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-metrics-tls\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4"
Mar 13 10:56:57.845662 master-0 kubenswrapper[33013]: I0313 10:56:57.845608 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7v6s\" (UniqueName: \"kubernetes.io/projected/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-kube-api-access-m7v6s\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g"
Mar 13 10:56:57.845694 master-0 kubenswrapper[33013]: I0313 10:56:57.845662 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb5l4\" (UniqueName: \"kubernetes.io/projected/48f99840-4d9e-49c5-819e-0bb15493feb5-kube-api-access-mb5l4\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz"
Mar 13 10:56:57.845725 master-0 kubenswrapper[33013]: I0313 10:56:57.845706 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-kubernetes\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.845808 master-0 kubenswrapper[33013]: I0313 10:56:57.845791 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-bound-sa-token\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:56:57.845838 master-0 kubenswrapper[33013]: I0313 10:56:57.845821 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.845868 master-0 kubenswrapper[33013]: I0313 10:56:57.845859 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-log-socket\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.845898 master-0 kubenswrapper[33013]: I0313 10:56:57.845880 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-serving-cert\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:56:57.845931 master-0 kubenswrapper[33013]: I0313 10:56:57.845912 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-multus\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.845960 master-0 kubenswrapper[33013]: I0313 10:56:57.845936 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:56:57.845960 master-0 kubenswrapper[33013]: I0313 10:56:57.845955 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-serving-cert\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:56:57.846141 master-0 kubenswrapper[33013]: I0313 10:56:57.846127 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-serving-cert\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8"
Mar 13 10:56:57.846182 master-0 kubenswrapper[33013]: I0313 10:56:57.846157 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac1a605-d2d5-4004-96f5-121c20555bde-service-ca\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:56:57.846227 master-0 kubenswrapper[33013]: I0313 10:56:57.846214 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b2e803-302b-4650-b18f-d3d2dd703bd5-serving-cert\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:56:57.846280 master-0 kubenswrapper[33013]: I0313 10:56:57.846266 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn"
Mar 13 10:56:57.846312 master-0 kubenswrapper[33013]: I0313 10:56:57.846289 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-run\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.846312 master-0 kubenswrapper[33013]: I0313 10:56:57.846291 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/574bf255-14b3-40af-b240-2d3abd5b86b8-serving-cert\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr"
Mar 13 10:56:57.846312 master-0 kubenswrapper[33013]: I0313 10:56:57.846309 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:56:57.846525 master-0 kubenswrapper[33013]: I0313 10:56:57.846496 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:56:57.846555 master-0 kubenswrapper[33013]: I0313 10:56:57.846544 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c"
Mar 13 10:56:57.846604 master-0 kubenswrapper[33013]: I0313 10:56:57.846575 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86774fd7-7c26-4b41-badb-de1004397637-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm"
Mar 13 10:56:57.846689 master-0 kubenswrapper[33013]: I0313 10:56:57.846624 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b2e803-302b-4650-b18f-d3d2dd703bd5-serving-cert\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:56:57.846793 master-0 kubenswrapper[33013]: I0313 10:56:57.846764 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b2e803-302b-4650-b18f-d3d2dd703bd5-config\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:56:57.846909 master-0 kubenswrapper[33013]: I0313 10:56:57.846882 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a998af-4fc0-4078-a6a0-93dde6c00508-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:56:57.847083 master-0 kubenswrapper[33013]: I0313 10:56:57.847060 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b2e803-302b-4650-b18f-d3d2dd703bd5-config\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:56:57.847283 master-0 kubenswrapper[33013]: I0313 10:56:57.847262 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/549bd192-0235-4994-b485-f1b10d16f6b5-signing-cabundle\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l"
Mar 13 10:56:57.847369 master-0 kubenswrapper[33013]: I0313 10:56:57.847338 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a998af-4fc0-4078-a6a0-93dde6c00508-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv"
Mar 13 10:56:57.860490 master-0 kubenswrapper[33013]: I0313 10:56:57.860439 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 13 10:56:57.879948 master-0 kubenswrapper[33013]: I0313 10:56:57.879914 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 13 10:56:57.901598 master-0 kubenswrapper[33013]: I0313 10:56:57.901516 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 13 10:56:57.920608 master-0 kubenswrapper[33013]: I0313 10:56:57.920562 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 13 10:56:57.941137 master-0 kubenswrapper[33013]: I0313 10:56:57.941070 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 13 10:56:57.947726 master-0 kubenswrapper[33013]: I0313 10:56:57.947668 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-bin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.947835 master-0 kubenswrapper[33013]: I0313 10:56:57.947739 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-conf-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.947835 master-0 kubenswrapper[33013]: I0313 10:56:57.947770 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e485e709-32ba-442b-98e5-b4073516c0ab-hosts-file\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8"
Mar 13 10:56:57.947900 master-0 kubenswrapper[33013]: I0313 10:56:57.947774 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-bin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.947900 master-0 kubenswrapper[33013]: I0313 10:56:57.947806 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-conf-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.947900 master-0 kubenswrapper[33013]: I0313 10:56:57.947845 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.948028 master-0 kubenswrapper[33013]: I0313 10:56:57.947970 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.948028 master-0 kubenswrapper[33013]: I0313 10:56:57.947991 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.948131 master-0 kubenswrapper[33013]: I0313 10:56:57.947992 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e485e709-32ba-442b-98e5-b4073516c0ab-hosts-file\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8"
Mar 13 10:56:57.948131 master-0 kubenswrapper[33013]: I0313 10:56:57.948062 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.948211 master-0 kubenswrapper[33013]: I0313 10:56:57.948151 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-bin\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.948211 master-0 kubenswrapper[33013]: I0313 10:56:57.948186 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-bin\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.948211 master-0 kubenswrapper[33013]: I0313 10:56:57.948206 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-hostroot\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.948297 master-0 kubenswrapper[33013]: I0313 10:56:57.948245 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-sys\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw"
Mar 13 10:56:57.948297 master-0 kubenswrapper[33013]: I0313 10:56:57.948281 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-socket-dir-parent\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.948355 master-0 kubenswrapper[33013]: I0313 10:56:57.948298 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-hostroot\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.948454 master-0 kubenswrapper[33013]: I0313 10:56:57.948335 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-sys\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw"
Mar 13 10:56:57.948454 master-0 kubenswrapper[33013]: I0313 10:56:57.948367 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-multus-socket-dir-parent\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.948454 master-0 kubenswrapper[33013]: I0313 10:56:57.948383 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-netns\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.948557 master-0 kubenswrapper[33013]: I0313 10:56:57.948523 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b12e76f4-b960-4534-90e6-a2cdbecd1728-host-slash\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:56:57.948610 master-0 kubenswrapper[33013]: I0313 10:56:57.948575 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.948707 master-0 kubenswrapper[33013]: I0313 10:56:57.948679 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-node-pullsecrets\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.948743 master-0 kubenswrapper[33013]: I0313 10:56:57.948722 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:56:57.948776 master-0 kubenswrapper[33013]: I0313 10:56:57.948757 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:56:57.948808 master-0 kubenswrapper[33013]: I0313 10:56:57.948777 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-system-cni-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.948943 master-0 kubenswrapper[33013]: I0313 10:56:57.948806 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.948943 master-0 kubenswrapper[33013]: I0313 10:56:57.948832 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-root\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw"
Mar 13 10:56:57.948943 master-0 kubenswrapper[33013]: I0313 10:56:57.948857 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-system-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.948943 master-0 kubenswrapper[33013]: I0313 10:56:57.948882 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-systemd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.949062 master-0 kubenswrapper[33013]: I0313 10:56:57.948969 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-etc-kubernetes\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.949062 master-0 kubenswrapper[33013]: I0313 10:56:57.948997 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit-dir\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.949121 master-0 kubenswrapper[33013]: I0313 10:56:57.949087 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysconfig\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.949154 master-0 kubenswrapper[33013]: I0313 10:56:57.949141 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-slash\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.949186 master-0 kubenswrapper[33013]: I0313 10:56:57.949175 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-systemd-units\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.949218 master-0 kubenswrapper[33013]: I0313 10:56:57.949195 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-dir\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:56:57.949298 master-0 kubenswrapper[33013]: I0313 10:56:57.949276 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-kubelet\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.949345 master-0 kubenswrapper[33013]: I0313 10:56:57.949324 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:56:57.949376 master-0 kubenswrapper[33013]: I0313 10:56:57.949360 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b12e76f4-b960-4534-90e6-a2cdbecd1728-host-slash\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " pod="openshift-network-operator/iptables-alerter-gdjjd"
Mar 13 10:56:57.949376 master-0 kubenswrapper[33013]: I0313 10:56:57.949372 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-k8s-cni-cncf-io\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.949437 master-0 kubenswrapper[33013]: I0313 10:56:57.949414 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-root\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw"
Mar 13 10:56:57.949472 master-0 kubenswrapper[33013]: I0313 10:56:57.949447 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-node-pullsecrets\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.949472 master-0 kubenswrapper[33013]: I0313 10:56:57.949453 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-cnibin\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.949535 master-0 kubenswrapper[33013]: I0313 10:56:57.949495 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-modprobe-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.949535 master-0 kubenswrapper[33013]: I0313 10:56:57.949502 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-system-cni-dir\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.949535 master-0 kubenswrapper[33013]: I0313 10:56:57.949526 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-conf\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.949735 master-0 kubenswrapper[33013]: I0313 10:56:57.949540 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-systemd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.949735 master-0 kubenswrapper[33013]: I0313 10:56:57.949575 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-etc-kubernetes\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.949735 master-0 kubenswrapper[33013]: I0313 10:56:57.949634 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit-dir\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:57.949735 master-0 kubenswrapper[33013]: I0313 10:56:57.949648 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:56:57.949735 master-0 kubenswrapper[33013]: I0313 10:56:57.949325 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.949735 master-0 kubenswrapper[33013]: I0313 10:56:57.948407 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-run-netns\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.950151 master-0 kubenswrapper[33013]: I0313 10:56:57.950126 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:56:57.950196 master-0 kubenswrapper[33013]: I0313 10:56:57.950161 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.950227 master-0 kubenswrapper[33013]: I0313 10:56:57.950210 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:56:57.950266 master-0 kubenswrapper[33013]: I0313 10:56:57.950239 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-os-release\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.950299 master-0 kubenswrapper[33013]: I0313 10:56:57.950259 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.950299 master-0 kubenswrapper[33013]: I0313 10:56:57.950274 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b10584c2-ef04-4649-bcb6-9222c9530c3f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:56:57.950299 master-0 kubenswrapper[33013]: I0313 10:56:57.950278 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-var-lib-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.950388 master-0 kubenswrapper[33013]: I0313 10:56:57.950331 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:56:57.950388 master-0 kubenswrapper[33013]: I0313 10:56:57.950373 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-os-release\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.950481 master-0 kubenswrapper[33013]: I0313 10:56:57.950400 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-k8s-cni-cncf-io\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t"
Mar 13 10:56:57.950481 master-0 kubenswrapper[33013]: I0313 10:56:57.950408 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-cnibin\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.950481 master-0 kubenswrapper[33013]: I0313 10:56:57.950469 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:56:57.950558 master-0 kubenswrapper[33013]: I0313 10:56:57.950522 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5843b0d4-a538-4261-b425-598e318c9d07-system-cni-dir\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:56:57.950558 master-0 kubenswrapper[33013]: I0313 10:56:57.950548 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-slash\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:56:57.950646 master-0 kubenswrapper[33013]: I0313 10:56:57.950560 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-wtmp\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw"
Mar 13 10:56:57.950687 master-0 kubenswrapper[33013]: I0313 10:56:57.950652 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\"
(UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-var-lib-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.950724 master-0 kubenswrapper[33013]: I0313 10:56:57.950686 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-modprobe-d\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.950724 master-0 kubenswrapper[33013]: I0313 10:56:57.950707 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-systemd-units\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.950724 master-0 kubenswrapper[33013]: I0313 10:56:57.950713 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysconfig\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.950820 master-0 kubenswrapper[33013]: I0313 10:56:57.950690 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-kubelet\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.950820 master-0 kubenswrapper[33013]: I0313 10:56:57.950695 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: 
\"kubernetes.io/host-path/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-wtmp\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:56:57.950820 master-0 kubenswrapper[33013]: I0313 10:56:57.950747 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-dir\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:56:57.950820 master-0 kubenswrapper[33013]: I0313 10:56:57.950752 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-sysctl-conf\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.950820 master-0 kubenswrapper[33013]: I0313 10:56:57.950727 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-cnibin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.950820 master-0 kubenswrapper[33013]: I0313 10:56:57.950758 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-cnibin\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.950820 master-0 kubenswrapper[33013]: I0313 10:56:57.950797 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-os-release\") pod \"multus-qng6t\" 
(UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.951217 master-0 kubenswrapper[33013]: I0313 10:56:57.950895 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-netd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.951217 master-0 kubenswrapper[33013]: I0313 10:56:57.950902 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-os-release\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.951217 master-0 kubenswrapper[33013]: I0313 10:56:57.950943 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-host-cni-netd\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.951217 master-0 kubenswrapper[33013]: I0313 10:56:57.951000 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:56:57.951217 master-0 kubenswrapper[33013]: I0313 10:56:57.951058 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-rootfs\") pod 
\"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:56:57.951217 master-0 kubenswrapper[33013]: I0313 10:56:57.951087 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:56:57.951217 master-0 kubenswrapper[33013]: I0313 10:56:57.951170 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-rootfs\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:56:57.951439 master-0 kubenswrapper[33013]: I0313 10:56:57.951201 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-multus-certs\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.951439 master-0 kubenswrapper[33013]: I0313 10:56:57.951253 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-multus-certs\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.951439 master-0 kubenswrapper[33013]: I0313 10:56:57.951421 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-node-log\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.951551 master-0 kubenswrapper[33013]: I0313 10:56:57.951454 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-node-log\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.951612 master-0 kubenswrapper[33013]: I0313 10:56:57.951542 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst" Mar 13 10:56:57.951700 master-0 kubenswrapper[33013]: I0313 10:56:57.951631 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-host-etc-kube\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" Mar 13 10:56:57.951740 master-0 kubenswrapper[33013]: I0313 10:56:57.951715 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-host\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.951826 master-0 kubenswrapper[33013]: I0313 10:56:57.951803 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:56:57.951886 master-0 kubenswrapper[33013]: I0313 10:56:57.951862 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:56:57.951989 master-0 kubenswrapper[33013]: I0313 10:56:57.951967 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-systemd\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952057 master-0 kubenswrapper[33013]: I0313 10:56:57.952034 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:56:57.952092 master-0 kubenswrapper[33013]: I0313 10:56:57.951574 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ac1a605-d2d5-4004-96f5-121c20555bde-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " 
pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst" Mar 13 10:56:57.952092 master-0 kubenswrapper[33013]: I0313 10:56:57.952045 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-host\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952063 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-lib-modules\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952096 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/257a4a8b-014c-4473-80a0-e95cf6d41bf1-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952113 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-systemd\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952095 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-host-etc-kube\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " 
pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952139 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-sys\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952201 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-lib-modules\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952211 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-netns\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952207 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-sys\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952250 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-kubelet\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.952810 master-0 
kubenswrapper[33013]: I0313 10:56:57.952301 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-etc-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952304 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-run-netns\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952356 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-kubelet\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952362 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-etc-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952506 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-kubernetes\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952543 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-etc-kubernetes\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952553 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-log-socket\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952572 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-multus\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952650 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-log-socket\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952694 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-run\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952721 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9aa4b44d-f202-4670-afab-44b38960026f-host-var-lib-cni-multus\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952724 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952773 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-ovn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952791 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-run\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952806 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-ovn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.952810 master-0 kubenswrapper[33013]: I0313 10:56:57.952818 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.953664 master-0 kubenswrapper[33013]: I0313 10:56:57.952904 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-run-openvswitch\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:56:57.953664 master-0 kubenswrapper[33013]: I0313 10:56:57.952907 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:57.953664 master-0 kubenswrapper[33013]: I0313 10:56:57.952938 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:56:57.953664 master-0 kubenswrapper[33013]: I0313 10:56:57.953014 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-var-lib-kubelet\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.953664 master-0 kubenswrapper[33013]: I0313 10:56:57.953086 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d9075a44-22d3-4562-819e-d5a92f013663-var-lib-kubelet\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw" Mar 13 10:56:57.961451 master-0 kubenswrapper[33013]: I0313 10:56:57.961414 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 10:56:57.961796 master-0 kubenswrapper[33013]: I0313 10:56:57.961770 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-serving-cert\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr" Mar 13 10:56:57.980660 master-0 kubenswrapper[33013]: I0313 10:56:57.980613 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 10:56:58.000407 master-0 kubenswrapper[33013]: I0313 10:56:58.000348 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 10:56:58.005306 master-0 kubenswrapper[33013]: I0313 10:56:58.005264 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-encryption-config\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:56:58.021185 master-0 kubenswrapper[33013]: I0313 10:56:58.021134 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 10:56:58.022121 master-0 kubenswrapper[33013]: I0313 10:56:58.022082 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:56:58.031072 master-0 kubenswrapper[33013]: I0313 10:56:58.031033 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:56:58.040824 master-0 kubenswrapper[33013]: I0313 10:56:58.040747 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 10:56:58.047258 master-0 kubenswrapper[33013]: I0313 10:56:58.047225 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-client\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:58.060462 master-0 kubenswrapper[33013]: I0313 10:56:58.060434 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 13 10:56:58.065067 master-0 kubenswrapper[33013]: I0313 10:56:58.065033 33013 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 10:56:58.066465 master-0 kubenswrapper[33013]: I0313 10:56:58.066410 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/257a4a8b-014c-4473-80a0-e95cf6d41bf1-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:56:58.071361 master-0 kubenswrapper[33013]: I0313 10:56:58.070204 33013 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 10:56:58.071361 master-0 kubenswrapper[33013]: I0313 10:56:58.070255 33013 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 10:56:58.071361 master-0 kubenswrapper[33013]: I0313 10:56:58.070269 33013 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 10:56:58.071361 master-0 kubenswrapper[33013]: I0313 10:56:58.070995 33013 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 10:56:58.080910 master-0 kubenswrapper[33013]: I0313 10:56:58.080872 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 13 10:56:58.082147 master-0 kubenswrapper[33013]: I0313 10:56:58.082126 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-trusted-ca-bundle\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:56:58.100913 master-0 kubenswrapper[33013]: I0313 10:56:58.100889 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 13 10:56:58.109638 master-0 kubenswrapper[33013]: I0313 10:56:58.109581 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11927952-723f-4d6d-922b-73139abe8877-config-volume\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596"
Mar 13 10:56:58.119985 master-0 kubenswrapper[33013]: I0313 10:56:58.119954 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 13 10:56:58.128173 master-0 kubenswrapper[33013]: I0313 10:56:58.128135 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-serving-ca\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:56:58.145966 master-0 kubenswrapper[33013]: I0313 10:56:58.145916 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 13 10:56:58.156665 master-0 kubenswrapper[33013]: I0313 10:56:58.156615 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") pod \"533638d2-44ce-4cf8-aa47-a6b89c94621d\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") "
Mar 13 10:56:58.156780 master-0 kubenswrapper[33013]: I0313 10:56:58.156749 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock" (OuterVolumeSpecName: "var-lock") pod "533638d2-44ce-4cf8-aa47-a6b89c94621d" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:56:58.156919 master-0 kubenswrapper[33013]: I0313 10:56:58.156896 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") pod \"533638d2-44ce-4cf8-aa47-a6b89c94621d\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") "
Mar 13 10:56:58.156985 master-0 kubenswrapper[33013]: I0313 10:56:58.156962 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "533638d2-44ce-4cf8-aa47-a6b89c94621d" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 10:56:58.158938 master-0 kubenswrapper[33013]: I0313 10:56:58.158903 33013 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 10:56:58.159023 master-0 kubenswrapper[33013]: I0313 10:56:58.158949 33013 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533638d2-44ce-4cf8-aa47-a6b89c94621d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 10:56:58.159863 master-0 kubenswrapper[33013]: I0313 10:56:58.159820 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 13 10:56:58.165907 master-0 kubenswrapper[33013]: I0313 10:56:58.165874 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-etcd-client\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:56:58.180395 master-0 kubenswrapper[33013]: I0313 10:56:58.180343 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 13 10:56:58.188765 master-0 kubenswrapper[33013]: I0313 10:56:58.188694 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/11927952-723f-4d6d-922b-73139abe8877-metrics-tls\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596"
Mar 13 10:56:58.200778 master-0 kubenswrapper[33013]: I0313 10:56:58.200735 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 13 10:56:58.206618 master-0 kubenswrapper[33013]: I0313 10:56:58.206571 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-serving-cert\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:56:58.220373 master-0 kubenswrapper[33013]: I0313 10:56:58.220318 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 13 10:56:58.224020 master-0 kubenswrapper[33013]: I0313 10:56:58.223934 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:56:58.247401 master-0 kubenswrapper[33013]: I0313 10:56:58.247359 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 13 10:56:58.260766 master-0 kubenswrapper[33013]: I0313 10:56:58.260545 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 13 10:56:58.266139 master-0 kubenswrapper[33013]: I0313 10:56:58.266099 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-vkqtt\" (UID: \"a4e40b43-5a7d-4865-bd3c-ca5911bf3ee3\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt"
Mar 13 10:56:58.280571 master-0 kubenswrapper[33013]: I0313 10:56:58.280334 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 10:56:58.287524 master-0 kubenswrapper[33013]: I0313 10:56:58.287482 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-etcd-serving-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:58.302749 master-0 kubenswrapper[33013]: I0313 10:56:58.302447 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 13 10:56:58.303070 master-0 kubenswrapper[33013]: I0313 10:56:58.303043 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d72d950-cfb4-4ed5-9ad6-f7266b937493-encryption-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:58.320674 master-0 kubenswrapper[33013]: I0313 10:56:58.320633 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 13 10:56:58.330878 master-0 kubenswrapper[33013]: I0313 10:56:58.330843 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-audit-policies\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:56:58.341233 master-0 kubenswrapper[33013]: I0313 10:56:58.341048 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-ntlbj"
Mar 13 10:56:58.361961 master-0 kubenswrapper[33013]: I0313 10:56:58.361893 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 10:56:58.366251 master-0 kubenswrapper[33013]: I0313 10:56:58.366190 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-config\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:58.380719 master-0 kubenswrapper[33013]: I0313 10:56:58.380679 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 13 10:56:58.381297 master-0 kubenswrapper[33013]: I0313 10:56:58.381235 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:56:58.400744 master-0 kubenswrapper[33013]: I0313 10:56:58.400715 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-lr8wh"
Mar 13 10:56:58.420139 master-0 kubenswrapper[33013]: I0313 10:56:58.420092 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 10:56:58.422622 master-0 kubenswrapper[33013]: I0313 10:56:58.422583 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-image-import-ca\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:58.449174 master-0 kubenswrapper[33013]: I0313 10:56:58.449135 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 10:56:58.452939 master-0 kubenswrapper[33013]: I0313 10:56:58.452887 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-trusted-ca-bundle\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:58.459984 master-0 kubenswrapper[33013]: I0313 10:56:58.459900 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 13 10:56:58.464286 master-0 kubenswrapper[33013]: I0313 10:56:58.464256 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1d72d950-cfb4-4ed5-9ad6-f7266b937493-audit\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:56:58.481598 master-0 kubenswrapper[33013]: I0313 10:56:58.481542 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 13 10:56:58.500660 master-0 kubenswrapper[33013]: I0313 10:56:58.500612 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-xpxj2"
Mar 13 10:56:58.521102 master-0 kubenswrapper[33013]: I0313 10:56:58.521068 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 10:56:58.541362 master-0 kubenswrapper[33013]: I0313 10:56:58.541311 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-x42l5"
Mar 13 10:56:58.561677 master-0 kubenswrapper[33013]: I0313 10:56:58.561566 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 13 10:56:58.562966 master-0 kubenswrapper[33013]: I0313 10:56:58.562877 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:56:58.566424 master-0 kubenswrapper[33013]: I0313 10:56:58.566395 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:56:58.571458 master-0 kubenswrapper[33013]: I0313 10:56:58.571437 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-default-certificate\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:56:58.580999 master-0 kubenswrapper[33013]: I0313 10:56:58.580966 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 13 10:56:58.587950 master-0 kubenswrapper[33013]: I0313 10:56:58.587921 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-metrics-certs\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:56:58.600695 master-0 kubenswrapper[33013]: I0313 10:56:58.600642 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 13 10:56:58.602578 master-0 kubenswrapper[33013]: I0313 10:56:58.602544 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eb778c86-ea51-4eab-82b8-a8e0bec0f050-stats-auth\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:56:58.620918 master-0 kubenswrapper[33013]: I0313 10:56:58.620895 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 13 10:56:58.626439 master-0 kubenswrapper[33013]: I0313 10:56:58.626387 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb778c86-ea51-4eab-82b8-a8e0bec0f050-service-ca-bundle\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:56:58.640723 master-0 kubenswrapper[33013]: I0313 10:56:58.640651 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 13 10:56:58.659812 master-0 kubenswrapper[33013]: I0313 10:56:58.659778 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 13 10:56:58.685115 master-0 kubenswrapper[33013]: I0313 10:56:58.685077 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 10:56:58.686742 master-0 kubenswrapper[33013]: I0313 10:56:58.686717 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac1a605-d2d5-4004-96f5-121c20555bde-service-ca\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:56:58.701173 master-0 kubenswrapper[33013]: I0313 10:56:58.701145 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 10:56:58.702707 master-0 kubenswrapper[33013]: I0313 10:56:58.702676 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac1a605-d2d5-4004-96f5-121c20555bde-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:56:58.720768 master-0 kubenswrapper[33013]: I0313 10:56:58.720734 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 10:56:58.721046 master-0 kubenswrapper[33013]: I0313 10:56:58.721009 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 13 10:56:58.738685 master-0 kubenswrapper[33013]: I0313 10:56:58.738657 33013 request.go:700] Waited for 1.020109226s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-bfgw8&limit=500&resourceVersion=0
Mar 13 10:56:58.740241 master-0 kubenswrapper[33013]: I0313 10:56:58.740187 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bfgw8"
Mar 13 10:56:58.760901 master-0 kubenswrapper[33013]: I0313 10:56:58.760872 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zwgvd"
Mar 13 10:56:58.780246 master-0 kubenswrapper[33013]: I0313 10:56:58.780189 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 13 10:56:58.781026 master-0 kubenswrapper[33013]: I0313 10:56:58.780994 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g"
Mar 13 10:56:58.785977 master-0 kubenswrapper[33013]: I0313 10:56:58.785934 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-mcd-auth-proxy-config\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq"
Mar 13 10:56:58.789964 master-0 kubenswrapper[33013]: I0313 10:56:58.789783 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs"
Mar 13 10:56:58.801199 master-0 kubenswrapper[33013]: I0313 10:56:58.801180 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 13 10:56:58.808322 master-0 kubenswrapper[33013]: I0313 10:56:58.808293 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-images\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs"
Mar 13 10:56:58.820717 master-0 kubenswrapper[33013]: I0313 10:56:58.820630 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 13 10:56:58.830833 master-0 kubenswrapper[33013]: E0313 10:56:58.830810 33013 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.830930 master-0 kubenswrapper[33013]: E0313 10:56:58.830893 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-kube-rbac-proxy-config podName:2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.330875811 +0000 UTC m=+2.806829160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-2n8dn" (UID: "2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.830992 master-0 kubenswrapper[33013]: E0313 10:56:58.830810 33013 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.831075 master-0 kubenswrapper[33013]: E0313 10:56:58.831030 33013 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.831189 master-0 kubenswrapper[33013]: E0313 10:56:58.831156 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-proxy-tls podName:26cc0e72-8b4f-4087-89b9-05d2cf6df3f6 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.331063797 +0000 UTC m=+2.807017146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-proxy-tls") pod "machine-config-controller-ff46b7bdf-jtj5g" (UID: "26cc0e72-8b4f-4087-89b9-05d2cf6df3f6") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.831284 master-0 kubenswrapper[33013]: I0313 10:56:58.831225 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs"
Mar 13 10:56:58.831360 master-0 kubenswrapper[33013]: E0313 10:56:58.831277 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cluster-baremetal-operator-tls podName:070b85a0-f076-4750-aa00-dabba401dc75 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.331249132 +0000 UTC m=+2.807202481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-gsr52" (UID: "070b85a0-f076-4750-aa00-dabba401dc75") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.831459 master-0 kubenswrapper[33013]: E0313 10:56:58.831447 33013 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.831610 master-0 kubenswrapper[33013]: E0313 10:56:58.831563 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-proxy-tls podName:60e17cd1-c520-4d8d-8c72-47bf73b8cc66 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.33155369 +0000 UTC m=+2.807507039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-proxy-tls") pod "machine-config-daemon-gdfnq" (UID: "60e17cd1-c520-4d8d-8c72-47bf73b8cc66") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.832353 master-0 kubenswrapper[33013]: E0313 10:56:58.832334 33013 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.832432 master-0 kubenswrapper[33013]: E0313 10:56:58.832377 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca podName:a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.332367723 +0000 UTC m=+2.808321072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca") pod "controller-manager-867876d6b6-tpq67" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.832432 master-0 kubenswrapper[33013]: E0313 10:56:58.832399 33013 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.832432 master-0 kubenswrapper[33013]: E0313 10:56:58.832423 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-auth-proxy-config podName:bfbaa57e-adac-48f8-8182-b4fdb42fbb9c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.332415895 +0000 UTC m=+2.808369244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" (UID: "bfbaa57e-adac-48f8-8182-b4fdb42fbb9c") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.832574 master-0 kubenswrapper[33013]: E0313 10:56:58.832439 33013 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.832574 master-0 kubenswrapper[33013]: E0313 10:56:58.832459 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles podName:a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.332453716 +0000 UTC m=+2.808407055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles") pod "controller-manager-867876d6b6-tpq67" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.832574 master-0 kubenswrapper[33013]: E0313 10:56:58.832479 33013 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.832574 master-0 kubenswrapper[33013]: E0313 10:56:58.832500 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-config podName:48f99840-4d9e-49c5-819e-0bb15493feb5 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.332494347 +0000 UTC m=+2.808447686 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-config") pod "machine-api-operator-84bf6db4f9-7h8nz" (UID: "48f99840-4d9e-49c5-819e-0bb15493feb5") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.832870 master-0 kubenswrapper[33013]: E0313 10:56:58.832854 33013 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.833044 master-0 kubenswrapper[33013]: E0313 10:56:58.833029 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs podName:14f6e3b2-716c-4392-b3c8-75b2168ccfb7 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.333017391 +0000 UTC m=+2.808970740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs") pod "multus-admission-controller-7769569c45-rshw5" (UID: "14f6e3b2-716c-4392-b3c8-75b2168ccfb7") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.833135 master-0 kubenswrapper[33013]: E0313 10:56:58.832855 33013 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.833237 master-0 kubenswrapper[33013]: E0313 10:56:58.833226 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.333218057 +0000 UTC m=+2.809171406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.833350 master-0 kubenswrapper[33013]: E0313 10:56:58.833075 33013 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.833454 master-0 kubenswrapper[33013]: E0313 10:56:58.833444 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-cert podName:d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.333435623 +0000 UTC m=+2.809388972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-cert") pod "cluster-autoscaler-operator-69576476f7-pzjxd" (UID: "d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.833552 master-0 kubenswrapper[33013]: E0313 10:56:58.833096 33013 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.833686 master-0 kubenswrapper[33013]: E0313 10:56:58.833675 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-webhook-cert podName:1edde4bf-4554-4ab2-b588-513ad84a9bae nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.3336673 +0000 UTC m=+2.809620639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-webhook-cert") pod "packageserver-7b564dfc5b-qc9cq" (UID: "1edde4bf-4554-4ab2-b588-513ad84a9bae") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.833790 master-0 kubenswrapper[33013]: E0313 10:56:58.833254 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.833886 master-0 kubenswrapper[33013]: E0313 10:56:58.833875 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5448b59a-b731-45a3-9ded-d25315f597fb-metrics-client-ca podName:5448b59a-b731-45a3-9ded-d25315f597fb nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.333865395 +0000 UTC m=+2.809818744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/5448b59a-b731-45a3-9ded-d25315f597fb-metrics-client-ca") pod "openshift-state-metrics-74cc79fd76-jxrlm" (UID: "5448b59a-b731-45a3-9ded-d25315f597fb") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.833981 master-0 kubenswrapper[33013]: E0313 10:56:58.833279 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.834071 master-0 kubenswrapper[33013]: E0313 10:56:58.834061 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.33405329 +0000 UTC m=+2.810006639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.834882 master-0 kubenswrapper[33013]: E0313 10:56:58.834869 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.835030 master-0 kubenswrapper[33013]: E0313 10:56:58.835019 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-tls podName:9d8af021-f20f-48a2-8b2a-3a5a3f37237f nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.335010337 +0000 UTC m=+2.810963686 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-nqnlp" (UID: "9d8af021-f20f-48a2-8b2a-3a5a3f37237f") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.836118 master-0 kubenswrapper[33013]: E0313 10:56:58.836106 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.836253 master-0 kubenswrapper[33013]: E0313 10:56:58.836243 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.336234932 +0000 UTC m=+2.812188281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.836321 master-0 kubenswrapper[33013]: E0313 10:56:58.836191 33013 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.836404 master-0 kubenswrapper[33013]: E0313 10:56:58.836394 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-machine-approver-tls podName:ec121f87-93ea-468c-a25f-2ec5e7d0e0ee nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.336386716 +0000 UTC m=+2.812340065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-machine-approver-tls") pod "machine-approver-754bdc9f9d-jcn8f" (UID: "ec121f87-93ea-468c-a25f-2ec5e7d0e0ee") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.836481 master-0 kubenswrapper[33013]: E0313 10:56:58.836211 33013 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.836609 master-0 kubenswrapper[33013]: E0313 10:56:58.836577 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.336570841 +0000 UTC m=+2.812524190 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.836707 master-0 kubenswrapper[33013]: E0313 10:56:58.836544 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.836805 master-0 kubenswrapper[33013]: E0313 10:56:58.836795 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.336786347 +0000 UTC m=+2.812739696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.836911 master-0 kubenswrapper[33013]: E0313 10:56:58.836889 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.836991 master-0 kubenswrapper[33013]: E0313 10:56:58.836981 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-metrics-client-ca podName:2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.336973852 +0000 UTC m=+2.812927191 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-metrics-client-ca") pod "kube-state-metrics-68b88f8cb5-2n8dn" (UID: "2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.837176 master-0 kubenswrapper[33013]: E0313 10:56:58.837159 33013 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.837220 master-0 kubenswrapper[33013]: E0313 10:56:58.837181 33013 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.837276 master-0 kubenswrapper[33013]: E0313 10:56:58.837202 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca podName:c09f42db-e6d7-469d-9761-88a879f6aa6b nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.337188888 +0000 UTC m=+2.813142237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca") pod "route-controller-manager-7d9bd68fd6-lwnzl" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.837317 master-0 kubenswrapper[33013]: E0313 10:56:58.837302 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert podName:c09f42db-e6d7-469d-9761-88a879f6aa6b nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.337277741 +0000 UTC m=+2.813231290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert") pod "route-controller-manager-7d9bd68fd6-lwnzl" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.837365 master-0 kubenswrapper[33013]: E0313 10:56:58.837336 33013 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.837403 master-0 kubenswrapper[33013]: E0313 10:56:58.837383 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-kube-rbac-proxy-config podName:5448b59a-b731-45a3-9ded-d25315f597fb nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.337372553 +0000 UTC m=+2.813326122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-74cc79fd76-jxrlm" (UID: "5448b59a-b731-45a3-9ded-d25315f597fb") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.838393 master-0 kubenswrapper[33013]: E0313 10:56:58.838371 33013 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.838459 master-0 kubenswrapper[33013]: E0313 10:56:58.838425 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48f99840-4d9e-49c5-819e-0bb15493feb5-machine-api-operator-tls podName:48f99840-4d9e-49c5-819e-0bb15493feb5 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.338413563 +0000 UTC m=+2.814366912 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/48f99840-4d9e-49c5-819e-0bb15493feb5-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-7h8nz" (UID: "48f99840-4d9e-49c5-819e-0bb15493feb5") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.838492 master-0 kubenswrapper[33013]: E0313 10:56:58.838459 33013 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.838492 master-0 kubenswrapper[33013]: E0313 10:56:58.838481 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-auth-proxy-config podName:d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.338474924 +0000 UTC m=+2.814428263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-auth-proxy-config") pod "cluster-autoscaler-operator-69576476f7-pzjxd" (UID: "d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.838548 master-0 kubenswrapper[33013]: E0313 10:56:58.838493 33013 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.838548 master-0 kubenswrapper[33013]: E0313 10:56:58.838517 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cloud-credential-operator-serving-cert podName:4e6ecc16-19cb-4b66-801f-b958b10d0ce7 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.338511255 +0000 UTC m=+2.814464604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-t8ll8" (UID: "4e6ecc16-19cb-4b66-801f-b958b10d0ce7") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.839069 master-0 kubenswrapper[33013]: E0313 10:56:58.839032 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.839204 master-0 kubenswrapper[33013]: E0313 10:56:58.839187 33013 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.839420 master-0 kubenswrapper[33013]: E0313 10:56:58.839332 33013 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.839518 master-0 kubenswrapper[33013]: E0313 10:56:58.839357 33013 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.839638 master-0 kubenswrapper[33013]: E0313 10:56:58.839620 33013 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.839736 master-0 kubenswrapper[33013]: E0313 10:56:58.839375 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.339083761 +0000 UTC m=+2.815037290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.839873 master-0 kubenswrapper[33013]: E0313 10:56:58.839862 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c87545aa-11c2-4e6e-8c13-16eeff3be83b-serving-cert podName:c87545aa-11c2-4e6e-8c13-16eeff3be83b nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.339848973 +0000 UTC m=+2.815802322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c87545aa-11c2-4e6e-8c13-16eeff3be83b-serving-cert") pod "insights-operator-8f89dfddd-nhsd9" (UID: "c87545aa-11c2-4e6e-8c13-16eeff3be83b") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.839986 master-0 kubenswrapper[33013]: E0313 10:56:58.839974 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-apiservice-cert podName:1edde4bf-4554-4ab2-b588-513ad84a9bae nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.339964246 +0000 UTC m=+2.815917595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-apiservice-cert") pod "packageserver-7b564dfc5b-qc9cq" (UID: "1edde4bf-4554-4ab2-b588-513ad84a9bae") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.840139 master-0 kubenswrapper[33013]: E0313 10:56:58.840126 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-config podName:ec121f87-93ea-468c-a25f-2ec5e7d0e0ee nodeName:}" failed. 
No retries permitted until 2026-03-13 10:56:59.34011551 +0000 UTC m=+2.816068949 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-config") pod "machine-approver-754bdc9f9d-jcn8f" (UID: "ec121f87-93ea-468c-a25f-2ec5e7d0e0ee") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.840288 master-0 kubenswrapper[33013]: E0313 10:56:58.840276 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-certs podName:4df756f0-c6b6-4730-842a-7ee9227397ae nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.340267394 +0000 UTC m=+2.816220743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-certs") pod "machine-config-server-mhk8z" (UID: "4df756f0-c6b6-4730-842a-7ee9227397ae") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.840436 master-0 kubenswrapper[33013]: E0313 10:56:58.840385 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.840525 master-0 kubenswrapper[33013]: E0313 10:56:58.840500 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.34046992 +0000 UTC m=+2.816423429 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.840668 master-0 kubenswrapper[33013]: I0313 10:56:58.840633 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-bk5cz" Mar 13 10:56:58.842732 master-0 kubenswrapper[33013]: E0313 10:56:58.842655 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.842796 master-0 kubenswrapper[33013]: E0313 10:56:58.842746 33013 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.842829 master-0 kubenswrapper[33013]: E0313 10:56:58.842807 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-metrics-client-ca podName:9d8af021-f20f-48a2-8b2a-3a5a3f37237f nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.342781005 +0000 UTC m=+2.818734524 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-metrics-client-ca") pod "prometheus-operator-5ff8674d55-nqnlp" (UID: "9d8af021-f20f-48a2-8b2a-3a5a3f37237f") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.842868 master-0 kubenswrapper[33013]: E0313 10:56:58.842661 33013 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.842868 master-0 kubenswrapper[33013]: E0313 10:56:58.842847 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-kube-rbac-proxy-config podName:5b796628-a6ca-4d5c-9870-0ca60b9372aa nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.342830986 +0000 UTC m=+2.818784375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-kube-rbac-proxy-config") pod "node-exporter-mtcsw" (UID: "5b796628-a6ca-4d5c-9870-0ca60b9372aa") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.842926 master-0 kubenswrapper[33013]: E0313 10:56:58.842667 33013 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.842926 master-0 kubenswrapper[33013]: E0313 10:56:58.842899 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.342875898 +0000 UTC m=+2.818829377 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.842990 master-0 kubenswrapper[33013]: E0313 10:56:58.842937 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-control-plane-machine-set-operator-tls podName:484e6d0b-d057-4658-8e49-bbe7e6f6ee86 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.342918289 +0000 UTC m=+2.818871858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-hszft" (UID: "484e6d0b-d057-4658-8e49-bbe7e6f6ee86") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.842990 master-0 kubenswrapper[33013]: E0313 10:56:58.842955 33013 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843048 master-0 kubenswrapper[33013]: E0313 10:56:58.842991 33013 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843048 master-0 kubenswrapper[33013]: E0313 10:56:58.843023 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert podName:05a72a4c-5ce8-49d1-8e4f-334f63d4e987 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.343001811 +0000 UTC m=+2.818955190 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert") pod "ingress-canary-dxhl9" (UID: "05a72a4c-5ce8-49d1-8e4f-334f63d4e987") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843103 master-0 kubenswrapper[33013]: E0313 10:56:58.843055 33013 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843103 master-0 kubenswrapper[33013]: E0313 10:56:58.843093 33013 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843168 master-0 kubenswrapper[33013]: E0313 10:56:58.843060 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cert podName:070b85a0-f076-4750-aa00-dabba401dc75 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.343044442 +0000 UTC m=+2.818997831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cert") pod "cluster-baremetal-operator-5cdb4c5598-gsr52" (UID: "070b85a0-f076-4750-aa00-dabba401dc75") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843168 master-0 kubenswrapper[33013]: E0313 10:56:58.843147 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9da11462-a91d-4d02-8614-78b4c5b2f7e2-cluster-storage-operator-serving-cert podName:9da11462-a91d-4d02-8614-78b4c5b2f7e2 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.343126055 +0000 UTC m=+2.819079444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/9da11462-a91d-4d02-8614-78b4c5b2f7e2-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-6fbfc8dc8f-fdt9m" (UID: "9da11462-a91d-4d02-8614-78b4c5b2f7e2") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843252 master-0 kubenswrapper[33013]: E0313 10:56:58.843179 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-node-bootstrap-token podName:4df756f0-c6b6-4730-842a-7ee9227397ae nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.343163156 +0000 UTC m=+2.819116725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-node-bootstrap-token") pod "machine-config-server-mhk8z" (UID: "4df756f0-c6b6-4730-842a-7ee9227397ae") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843252 master-0 kubenswrapper[33013]: E0313 10:56:58.843201 33013 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.843324 master-0 kubenswrapper[33013]: E0313 10:56:58.843266 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config podName:a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.343246858 +0000 UTC m=+2.819200237 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config") pod "controller-manager-867876d6b6-tpq67" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.843416 master-0 kubenswrapper[33013]: E0313 10:56:58.843402 33013 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.843491 master-0 kubenswrapper[33013]: E0313 10:56:58.843450 33013 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843570 master-0 kubenswrapper[33013]: E0313 10:56:58.843559 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-config podName:070b85a0-f076-4750-aa00-dabba401dc75 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.343549066 +0000 UTC m=+2.819502415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-config") pod "cluster-baremetal-operator-5cdb4c5598-gsr52" (UID: "070b85a0-f076-4750-aa00-dabba401dc75") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.843684 master-0 kubenswrapper[33013]: E0313 10:56:58.843672 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.34366178 +0000 UTC m=+2.819615129 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843765 master-0 kubenswrapper[33013]: E0313 10:56:58.843689 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:56:58.843834 master-0 kubenswrapper[33013]: E0313 10:56:58.843804 33013 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843891 master-0 kubenswrapper[33013]: E0313 10:56:58.843872 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert podName:a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.343854295 +0000 UTC m=+2.819807664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert") pod "controller-manager-867876d6b6-tpq67" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:56:58.843960 master-0 kubenswrapper[33013]: E0313 10:56:58.843949 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.343939157 +0000 UTC m=+2.819892496 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.844024 master-0 kubenswrapper[33013]: E0313 10:56:58.843978 33013 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.844096 master-0 kubenswrapper[33013]: E0313 10:56:58.844058 33013 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.844164 master-0 kubenswrapper[33013]: E0313 10:56:58.844154 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-service-ca-bundle podName:c87545aa-11c2-4e6e-8c13-16eeff3be83b nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.344145713 +0000 UTC m=+2.820099062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-service-ca-bundle") pod "insights-operator-8f89dfddd-nhsd9" (UID: "c87545aa-11c2-4e6e-8c13-16eeff3be83b") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.844243 master-0 kubenswrapper[33013]: E0313 10:56:58.844233 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-cloud-controller-manager-operator-tls podName:bfbaa57e-adac-48f8-8182-b4fdb42fbb9c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.344224515 +0000 UTC m=+2.820177854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" (UID: "bfbaa57e-adac-48f8-8182-b4fdb42fbb9c") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.845436 master-0 kubenswrapper[33013]: E0313 10:56:58.845410 33013 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.845486 master-0 kubenswrapper[33013]: E0313 10:56:58.845461 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-trusted-ca-bundle podName:c87545aa-11c2-4e6e-8c13-16eeff3be83b nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.34544683 +0000 UTC m=+2.821400179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-trusted-ca-bundle") pod "insights-operator-8f89dfddd-nhsd9" (UID: "c87545aa-11c2-4e6e-8c13-16eeff3be83b") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.845637 master-0 kubenswrapper[33013]: E0313 10:56:58.845501 33013 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.845637 master-0 kubenswrapper[33013]: E0313 10:56:58.845536 33013 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.845637 master-0 kubenswrapper[33013]: E0313 10:56:58.845546 33013 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.845637 master-0 kubenswrapper[33013]: E0313 10:56:58.845577 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-auth-proxy-config podName:ec121f87-93ea-468c-a25f-2ec5e7d0e0ee nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.345554083 +0000 UTC m=+2.821507602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-auth-proxy-config") pod "machine-approver-754bdc9f9d-jcn8f" (UID: "ec121f87-93ea-468c-a25f-2ec5e7d0e0ee") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.845637 master-0 kubenswrapper[33013]: E0313 10:56:58.845622 33013 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.845637 master-0 kubenswrapper[33013]: E0313 10:56:58.845625 33013 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.845819 master-0 kubenswrapper[33013]: E0313 10:56:58.845645 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-tls podName:5448b59a-b731-45a3-9ded-d25315f597fb nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.345625505 +0000 UTC m=+2.821579104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-jxrlm" (UID: "5448b59a-b731-45a3-9ded-d25315f597fb") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.845819 master-0 kubenswrapper[33013]: E0313 10:56:58.845649 33013 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.845819 master-0 kubenswrapper[33013]: E0313 10:56:58.845685 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config podName:c09f42db-e6d7-469d-9761-88a879f6aa6b nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.345668756 +0000 UTC m=+2.821622335 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config") pod "route-controller-manager-7d9bd68fd6-lwnzl" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.845819 master-0 kubenswrapper[33013]: E0313 10:56:58.845716 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-tls podName:5b796628-a6ca-4d5c-9870-0ca60b9372aa nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.345700727 +0000 UTC m=+2.821654316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-tls") pod "node-exporter-mtcsw" (UID: "5b796628-a6ca-4d5c-9870-0ca60b9372aa") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.845819 master-0 kubenswrapper[33013]: E0313 10:56:58.845505 33013 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.845819 master-0 kubenswrapper[33013]: E0313 10:56:58.845750 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-tls podName:2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.345733478 +0000 UTC m=+2.821687067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-2n8dn" (UID: "2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.845819 master-0 kubenswrapper[33013]: E0313 10:56:58.845509 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.845819 master-0 kubenswrapper[33013]: E0313 10:56:58.845784 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-images podName:070b85a0-f076-4750-aa00-dabba401dc75 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.345766598 +0000 UTC m=+2.821720197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-images") pod "cluster-baremetal-operator-5cdb4c5598-gsr52" (UID: "070b85a0-f076-4750-aa00-dabba401dc75") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.845819 master-0 kubenswrapper[33013]: E0313 10:56:58.845820 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-images podName:48f99840-4d9e-49c5-819e-0bb15493feb5 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.34580544 +0000 UTC m=+2.821759029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-images") pod "machine-api-operator-84bf6db4f9-7h8nz" (UID: "48f99840-4d9e-49c5-819e-0bb15493feb5") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.846067 master-0 kubenswrapper[33013]: E0313 10:56:58.845602 33013 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.846067 master-0 kubenswrapper[33013]: E0313 10:56:58.845852 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-kube-rbac-proxy-config podName:9d8af021-f20f-48a2-8b2a-3a5a3f37237f nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.34583651 +0000 UTC m=+2.821790109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-nqnlp" (UID: "9d8af021-f20f-48a2-8b2a-3a5a3f37237f") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.846067 master-0 kubenswrapper[33013]: E0313 10:56:58.845903 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-images podName:bfbaa57e-adac-48f8-8182-b4fdb42fbb9c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.345878632 +0000 UTC m=+2.821832171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" (UID: "bfbaa57e-adac-48f8-8182-b4fdb42fbb9c") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.846067 master-0 kubenswrapper[33013]: E0313 10:56:58.845922 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.846067 master-0 kubenswrapper[33013]: E0313 10:56:58.845994 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-metrics-client-ca podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.345979624 +0000 UTC m=+2.821933013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-metrics-client-ca") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.846248 master-0 kubenswrapper[33013]: E0313 10:56:58.846233 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.846344 master-0 kubenswrapper[33013]: E0313 10:56:58.846333 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b796628-a6ca-4d5c-9870-0ca60b9372aa-metrics-client-ca podName:5b796628-a6ca-4d5c-9870-0ca60b9372aa nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.346322934 +0000 UTC m=+2.822276283 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/5b796628-a6ca-4d5c-9870-0ca60b9372aa-metrics-client-ca") pod "node-exporter-mtcsw" (UID: "5b796628-a6ca-4d5c-9870-0ca60b9372aa") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.846795 master-0 kubenswrapper[33013]: E0313 10:56:58.846769 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.846848 master-0 kubenswrapper[33013]: E0313 10:56:58.846818 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.346808508 +0000 UTC m=+2.822761857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.846848 master-0 kubenswrapper[33013]: E0313 10:56:58.846843 33013 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.846911 master-0 kubenswrapper[33013]: E0313 10:56:58.846865 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.846944 master-0 kubenswrapper[33013]: E0313 10:56:58.846875 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86774fd7-7c26-4b41-badb-de1004397637-samples-operator-tls podName:86774fd7-7c26-4b41-badb-de1004397637 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.346868129 +0000 UTC m=+2.822821478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/86774fd7-7c26-4b41-badb-de1004397637-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-mq7rm" (UID: "86774fd7-7c26-4b41-badb-de1004397637") : failed to sync secret cache: timed out waiting for the condition
Mar 13 10:56:58.846975 master-0 kubenswrapper[33013]: E0313 10:56:58.846963 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-custom-resource-state-configmap podName:2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.346945652 +0000 UTC m=+2.822899031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-2n8dn" (UID: "2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.847045 master-0 kubenswrapper[33013]: E0313 10:56:58.847030 33013 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.847131 master-0 kubenswrapper[33013]: E0313 10:56:58.847121 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cco-trusted-ca podName:4e6ecc16-19cb-4b66-801f-b958b10d0ce7 nodeName:}" failed. No retries permitted until 2026-03-13 10:56:59.347111666 +0000 UTC m=+2.823065015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cco-trusted-ca") pod "cloud-credential-operator-55d85b7b47-t8ll8" (UID: "4e6ecc16-19cb-4b66-801f-b958b10d0ce7") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 10:56:58.860873 master-0 kubenswrapper[33013]: I0313 10:56:58.860853 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 13 10:56:58.881399 master-0 kubenswrapper[33013]: I0313 10:56:58.881350 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 10:56:58.901314 master-0 kubenswrapper[33013]: I0313 10:56:58.901280 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 13 10:56:58.920272 master-0 kubenswrapper[33013]: I0313 10:56:58.920243 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 13 10:56:58.940819 master-0 kubenswrapper[33013]: I0313 10:56:58.940776 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-4j4rp"
Mar 13 10:56:58.960768 master-0 kubenswrapper[33013]: I0313 10:56:58.960732 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 13 10:56:58.981765 master-0 kubenswrapper[33013]: I0313 10:56:58.981723 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-4gsfk"
Mar 13 10:56:59.013748 master-0 kubenswrapper[33013]: I0313 10:56:59.013684 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 10:56:59.019911 master-0 kubenswrapper[33013]: I0313 10:56:59.019882 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-q8ddz"
Mar 13 10:56:59.028786 master-0 kubenswrapper[33013]: I0313 10:56:59.028741 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:56:59.040766 master-0 kubenswrapper[33013]: I0313 10:56:59.040744 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 13 10:56:59.060110 master-0 kubenswrapper[33013]: I0313 10:56:59.060076 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-frjx4"
Mar 13 10:56:59.080283 master-0 kubenswrapper[33013]: I0313 10:56:59.080110 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 13 10:56:59.100821 master-0 kubenswrapper[33013]: I0313 10:56:59.100751 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4mksc"
Mar 13 10:56:59.121101 master-0 kubenswrapper[33013]: I0313 10:56:59.121050 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 13 10:56:59.139566 master-0 kubenswrapper[33013]: I0313 10:56:59.139525 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 13 10:56:59.160469 master-0 kubenswrapper[33013]: I0313 10:56:59.160420 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 10:56:59.189664 master-0 kubenswrapper[33013]: I0313 10:56:59.183636 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 13 10:56:59.200335 master-0 kubenswrapper[33013]: I0313 10:56:59.200275 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 13 10:56:59.220273 master-0 kubenswrapper[33013]: I0313 10:56:59.220186 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 13 10:56:59.240039 master-0 kubenswrapper[33013]: I0313 10:56:59.239960 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 13 10:56:59.260108 master-0 kubenswrapper[33013]: I0313 10:56:59.260055 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 10:56:59.280915 master-0 kubenswrapper[33013]: I0313 10:56:59.280874 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 13 10:56:59.299840 master-0 kubenswrapper[33013]: I0313 10:56:59.299760 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 13 10:56:59.320023 master-0 kubenswrapper[33013]: I0313 10:56:59.319972 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 13 10:56:59.340647 master-0 kubenswrapper[33013]: I0313 10:56:59.340465 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 13 10:56:59.359737 master-0 kubenswrapper[33013]: I0313 10:56:59.359680 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-t9jpj"
Mar 13 10:56:59.380806 master-0 kubenswrapper[33013]: I0313 10:56:59.380732 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-chx8x"
Mar 13 10:56:59.393792 master-0 kubenswrapper[33013]: I0313 10:56:59.393712 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn"
Mar 13 10:56:59.394049 master-0 kubenswrapper[33013]: I0313 10:56:59.393982 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm"
Mar 13 10:56:59.394164 master-0 kubenswrapper[33013]: I0313 10:56:59.394105 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"
Mar 13 10:56:59.394248 master-0 kubenswrapper[33013]: I0313 10:56:59.394207 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"
Mar 13 10:56:59.394385 master-0 kubenswrapper[33013]: I0313 10:56:59.394357 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd"
Mar 13 10:56:59.394448 master-0 kubenswrapper[33013]: I0313 10:56:59.394394 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:56:59.394501 master-0 kubenswrapper[33013]: I0313 10:56:59.394461 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48f99840-4d9e-49c5-819e-0bb15493feb5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.394724 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.394813 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.394971 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87545aa-11c2-4e6e-8c13-16eeff3be83b-serving-cert\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395059 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-apiservice-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395087 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395140 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-certs\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395136 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395257 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395296 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87545aa-11c2-4e6e-8c13-16eeff3be83b-serving-cert\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395302 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395451 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395512 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395575 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-node-bootstrap-token\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z"
Mar 13 10:56:59.395685 master-0 kubenswrapper[33013]: I0313 10:56:59.395653 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw"
Mar 13 10:56:59.396239 master-0 kubenswrapper[33013]: I0313 10:56:59.395701 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/9da11462-a91d-4d02-8614-78b4c5b2f7e2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m"
Mar 13 10:56:59.396239 master-0 kubenswrapper[33013]: I0313 10:56:59.395749 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"
Mar 13 10:56:59.396239 master-0 kubenswrapper[33013]: I0313 10:56:59.395787 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c"
Mar 13 10:56:59.396239 master-0 kubenswrapper[33013]: I0313 10:56:59.395794 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft"
Mar 13 10:56:59.396239 master-0 kubenswrapper[33013]: I0313 10:56:59.395875 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67"
Mar 13 10:56:59.396239 master-0 kubenswrapper[33013]: I0313 10:56:59.395905 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9"
Mar 13 10:56:59.396239 master-0 kubenswrapper[33013]: I0313 10:56:59.396233 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/9da11462-a91d-4d02-8614-78b4c5b2f7e2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m"
Mar 13 10:56:59.396526 master-0 kubenswrapper[33013]: I0313 10:56:59.396239 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf"
Mar 13 10:56:59.396526 master-0 kubenswrapper[33013]: I0313 10:56:59.396318 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-config\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"
Mar 13 10:56:59.396526 master-0 kubenswrapper[33013]: I0313 10:56:59.396389 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67"
Mar 13 10:56:59.396526 master-0 kubenswrapper[33013]: I0313 10:56:59.396387 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"
Mar 13 10:56:59.396526 master-0 kubenswrapper[33013]: I0313 10:56:59.396489 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x"
Mar 13 10:56:59.396526 master-0 kubenswrapper[33013]: I0313 10:56:59.396516 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-config\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"
Mar 13 10:56:59.396526 master-0 kubenswrapper[33013]: I0313 10:56:59.396528 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn"
Mar 13 10:56:59.396844 master-0 kubenswrapper[33013]: I0313 10:56:59.396686 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9"
Mar 13 10:56:59.396844 master-0 kubenswrapper[33013]: I0313 10:56:59.396744 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9"
Mar 13 10:56:59.396844 master-0 kubenswrapper[33013]: I0313 10:56:59.396813 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-tls\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw"
Mar 13 10:56:59.396976 master-0 kubenswrapper[33013]: I0313 10:56:59.396847 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp"
Mar 13 10:56:59.396976 master-0 kubenswrapper[33013]: I0313 10:56:59.396878 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:56:59.396976 master-0 kubenswrapper[33013]: I0313 10:56:59.396910 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:56:59.396976 master-0 kubenswrapper[33013]: I0313 10:56:59.396937 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-metrics-client-ca\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:56:59.397142 master-0 kubenswrapper[33013]: I0313 10:56:59.397052 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:56:59.397142 master-0 kubenswrapper[33013]: I0313 10:56:59.397093 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: 
\"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:56:59.397142 master-0 kubenswrapper[33013]: I0313 10:56:59.397123 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b796628-a6ca-4d5c-9870-0ca60b9372aa-metrics-client-ca\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:56:59.397266 master-0 kubenswrapper[33013]: I0313 10:56:59.397150 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-images\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:56:59.397375 master-0 kubenswrapper[33013]: I0313 10:56:59.397334 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-images\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:56:59.397492 master-0 kubenswrapper[33013]: I0313 10:56:59.397469 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:56:59.397536 master-0 kubenswrapper[33013]: I0313 10:56:59.397506 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/86774fd7-7c26-4b41-badb-de1004397637-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:56:59.397603 master-0 kubenswrapper[33013]: I0313 10:56:59.397536 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:56:59.397603 master-0 kubenswrapper[33013]: I0313 10:56:59.397551 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/070b85a0-f076-4750-aa00-dabba401dc75-images\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:56:59.397603 master-0 kubenswrapper[33013]: I0313 10:56:59.397570 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" Mar 13 10:56:59.397735 master-0 kubenswrapper[33013]: I0313 10:56:59.397678 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: 
\"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:56:59.397735 master-0 kubenswrapper[33013]: I0313 10:56:59.397713 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:56:59.397827 master-0 kubenswrapper[33013]: I0313 10:56:59.397742 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:56:59.397827 master-0 kubenswrapper[33013]: I0313 10:56:59.397748 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86774fd7-7c26-4b41-badb-de1004397637-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:56:59.397911 master-0 kubenswrapper[33013]: I0313 10:56:59.397864 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-proxy-tls\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:56:59.398009 master-0 kubenswrapper[33013]: 
I0313 10:56:59.397988 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-config\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:56:59.398057 master-0 kubenswrapper[33013]: I0313 10:56:59.398026 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:56:59.398057 master-0 kubenswrapper[33013]: I0313 10:56:59.398031 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8" Mar 13 10:56:59.398057 master-0 kubenswrapper[33013]: I0313 10:56:59.398047 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/070b85a0-f076-4750-aa00-dabba401dc75-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52" Mar 13 10:56:59.398727 master-0 kubenswrapper[33013]: I0313 10:56:59.398054 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles\") pod 
\"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:56:59.398806 master-0 kubenswrapper[33013]: I0313 10:56:59.398777 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:56:59.398847 master-0 kubenswrapper[33013]: I0313 10:56:59.398834 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5" Mar 13 10:56:59.398877 master-0 kubenswrapper[33013]: I0313 10:56:59.398865 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:56:59.398924 master-0 kubenswrapper[33013]: I0313 10:56:59.398905 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-cert\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:56:59.399003 master-0 
kubenswrapper[33013]: I0313 10:56:59.398975 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-webhook-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:56:59.399036 master-0 kubenswrapper[33013]: I0313 10:56:59.399012 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:56:59.399082 master-0 kubenswrapper[33013]: I0313 10:56:59.399051 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5448b59a-b731-45a3-9ded-d25315f597fb-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:56:59.399115 master-0 kubenswrapper[33013]: I0313 10:56:59.399106 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:56:59.399166 master-0 kubenswrapper[33013]: I0313 10:56:59.399146 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:56:59.399290 master-0 kubenswrapper[33013]: I0313 10:56:59.399258 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:56:59.399335 master-0 kubenswrapper[33013]: I0313 10:56:59.399294 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:56:59.399381 master-0 kubenswrapper[33013]: I0313 10:56:59.399332 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:56:59.399923 master-0 kubenswrapper[33013]: I0313 10:56:59.399887 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-cert\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd" Mar 13 10:56:59.400155 master-0 kubenswrapper[33013]: I0313 10:56:59.399965 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 10:56:59.404989 master-0 kubenswrapper[33013]: I0313 10:56:59.404933 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48f99840-4d9e-49c5-819e-0bb15493feb5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:56:59.420324 master-0 kubenswrapper[33013]: I0313 10:56:59.420271 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 10:56:59.429026 master-0 kubenswrapper[33013]: I0313 10:56:59.428960 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-config\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:56:59.440146 master-0 kubenswrapper[33013]: I0313 10:56:59.440077 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 13 10:56:59.459748 master-0 kubenswrapper[33013]: I0313 10:56:59.459685 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-fwh6p" Mar 13 10:56:59.485831 master-0 kubenswrapper[33013]: I0313 10:56:59.485771 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 10:56:59.488086 master-0 kubenswrapper[33013]: I0313 10:56:59.488028 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:56:59.500765 master-0 kubenswrapper[33013]: I0313 10:56:59.500704 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 10:56:59.507624 master-0 kubenswrapper[33013]: I0313 10:56:59.507546 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48f99840-4d9e-49c5-819e-0bb15493feb5-images\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz" Mar 13 10:56:59.520345 master-0 kubenswrapper[33013]: I0313 10:56:59.520284 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 10:56:59.526328 master-0 kubenswrapper[33013]: I0313 10:56:59.526268 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-apiservice-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:56:59.530726 master-0 kubenswrapper[33013]: I0313 10:56:59.530687 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1edde4bf-4554-4ab2-b588-513ad84a9bae-webhook-cert\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq" Mar 13 10:56:59.539577 
master-0 kubenswrapper[33013]: I0313 10:56:59.539514 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 13 10:56:59.547567 master-0 kubenswrapper[33013]: I0313 10:56:59.547517 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c87545aa-11c2-4e6e-8c13-16eeff3be83b-service-ca-bundle\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:56:59.560363 master-0 kubenswrapper[33013]: I0313 10:56:59.560292 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 13 10:56:59.580310 master-0 kubenswrapper[33013]: I0313 10:56:59.580258 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 10:56:59.588531 master-0 kubenswrapper[33013]: I0313 10:56:59.588488 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-proxy-tls\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq" Mar 13 10:56:59.602964 master-0 kubenswrapper[33013]: I0313 10:56:59.602668 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-tf6mr" Mar 13 10:56:59.626209 master-0 kubenswrapper[33013]: I0313 10:56:59.626157 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 10:56:59.630051 master-0 kubenswrapper[33013]: I0313 10:56:59.630016 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g" Mar 13 10:56:59.641153 master-0 kubenswrapper[33013]: I0313 10:56:59.641063 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6rglb" Mar 13 10:56:59.659618 master-0 kubenswrapper[33013]: I0313 10:56:59.659562 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jzkqp" Mar 13 10:56:59.680304 master-0 kubenswrapper[33013]: I0313 10:56:59.680274 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-xdq92" Mar 13 10:56:59.699918 master-0 kubenswrapper[33013]: I0313 10:56:59.699879 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 10:56:59.700358 master-0 kubenswrapper[33013]: I0313 10:56:59.700337 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:56:59.719146 master-0 kubenswrapper[33013]: I0313 10:56:59.719096 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 10:56:59.725798 master-0 kubenswrapper[33013]: I0313 10:56:59.725761 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:56:59.739524 master-0 kubenswrapper[33013]: I0313 10:56:59.739483 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 10:56:59.747260 master-0 kubenswrapper[33013]: I0313 10:56:59.747216 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:56:59.758169 master-0 kubenswrapper[33013]: I0313 10:56:59.758108 33013 request.go:700] Waited for 2.027991089s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Mar 13 10:56:59.760459 master-0 kubenswrapper[33013]: I0313 10:56:59.760431 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 10:56:59.779776 master-0 kubenswrapper[33013]: I0313 10:56:59.779708 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 10:56:59.800289 master-0 kubenswrapper[33013]: I0313 10:56:59.800259 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 10:56:59.806641 master-0 kubenswrapper[33013]: I0313 10:56:59.806611 33013 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-node-bootstrap-token\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:56:59.820313 master-0 kubenswrapper[33013]: I0313 10:56:59.820265 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-5p4h2" Mar 13 10:56:59.839837 master-0 kubenswrapper[33013]: I0313 10:56:59.839802 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 10:56:59.845923 master-0 kubenswrapper[33013]: I0313 10:56:59.845875 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4df756f0-c6b6-4730-842a-7ee9227397ae-certs\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z" Mar 13 10:56:59.861564 master-0 kubenswrapper[33013]: I0313 10:56:59.861437 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7mc4m" Mar 13 10:56:59.880615 master-0 kubenswrapper[33013]: I0313 10:56:59.880547 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 10:56:59.888021 master-0 kubenswrapper[33013]: I0313 10:56:59.887985 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:56:59.900324 master-0 kubenswrapper[33013]: I0313 10:56:59.900267 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 10:56:59.920055 master-0 kubenswrapper[33013]: I0313 10:56:59.920000 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 10:56:59.930138 master-0 kubenswrapper[33013]: I0313 10:56:59.930061 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:56:59.939868 master-0 kubenswrapper[33013]: I0313 10:56:59.939812 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 10:56:59.961254 master-0 kubenswrapper[33013]: I0313 10:56:59.961210 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 10:56:59.967373 master-0 kubenswrapper[33013]: I0313 10:56:59.967329 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:56:59.979951 master-0 kubenswrapper[33013]: I0313 10:56:59.979904 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 13 10:56:59.989993 master-0 kubenswrapper[33013]: I0313 10:56:59.989956 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:57:00.000248 master-0 kubenswrapper[33013]: I0313 10:57:00.000203 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 13 10:57:00.008119 master-0 kubenswrapper[33013]: I0313 10:57:00.008074 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:57:00.020102 master-0 kubenswrapper[33013]: I0313 10:57:00.020046 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 13 10:57:00.024233 master-0 kubenswrapper[33013]: I0313 10:57:00.024205 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " 
pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:57:00.026447 master-0 kubenswrapper[33013]: I0313 10:57:00.026423 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:57:00.027753 master-0 kubenswrapper[33013]: I0313 10:57:00.027706 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b796628-a6ca-4d5c-9870-0ca60b9372aa-metrics-client-ca\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:57:00.027820 master-0 kubenswrapper[33013]: I0313 10:57:00.027706 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-metrics-client-ca\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:00.030249 master-0 kubenswrapper[33013]: I0313 10:57:00.030209 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5448b59a-b731-45a3-9ded-d25315f597fb-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:57:00.045132 master-0 kubenswrapper[33013]: I0313 10:57:00.045070 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 10:57:00.045643 master-0 
kubenswrapper[33013]: I0313 10:57:00.045576 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:57:00.059632 master-0 kubenswrapper[33013]: I0313 10:57:00.059555 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-4xgf9" Mar 13 10:57:00.081002 master-0 kubenswrapper[33013]: I0313 10:57:00.080947 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-45xkz" Mar 13 10:57:00.101217 master-0 kubenswrapper[33013]: I0313 10:57:00.101153 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 13 10:57:00.107817 master-0 kubenswrapper[33013]: I0313 10:57:00.107767 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5448b59a-b731-45a3-9ded-d25315f597fb-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm" Mar 13 10:57:00.120520 master-0 kubenswrapper[33013]: I0313 10:57:00.120416 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 10:57:00.127889 master-0 kubenswrapper[33013]: I0313 10:57:00.127834 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-tls\") pod 
\"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:57:00.139663 master-0 kubenswrapper[33013]: I0313 10:57:00.139616 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9b2d2" Mar 13 10:57:00.161442 master-0 kubenswrapper[33013]: I0313 10:57:00.161396 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-ncbcz" Mar 13 10:57:00.180615 master-0 kubenswrapper[33013]: I0313 10:57:00.180536 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 13 10:57:00.188883 master-0 kubenswrapper[33013]: I0313 10:57:00.188844 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:57:00.200127 master-0 kubenswrapper[33013]: I0313 10:57:00.200075 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 13 10:57:00.211149 master-0 kubenswrapper[33013]: I0313 10:57:00.211103 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:00.219721 master-0 kubenswrapper[33013]: I0313 10:57:00.219672 33013 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-jkx4c" Mar 13 10:57:00.241841 master-0 kubenswrapper[33013]: I0313 10:57:00.241790 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 13 10:57:00.247929 master-0 kubenswrapper[33013]: I0313 10:57:00.247881 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:57:00.260216 master-0 kubenswrapper[33013]: I0313 10:57:00.260151 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 13 10:57:00.266228 master-0 kubenswrapper[33013]: I0313 10:57:00.266179 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:00.279705 master-0 kubenswrapper[33013]: I0313 10:57:00.279655 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7mk1tpvcusf46" Mar 13 10:57:00.280304 master-0 kubenswrapper[33013]: I0313 10:57:00.280251 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:00.300486 master-0 kubenswrapper[33013]: 
I0313 10:57:00.300405 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 13 10:57:00.306764 master-0 kubenswrapper[33013]: I0313 10:57:00.306710 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b796628-a6ca-4d5c-9870-0ca60b9372aa-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:57:00.320841 master-0 kubenswrapper[33013]: I0313 10:57:00.320763 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 10:57:00.327275 master-0 kubenswrapper[33013]: I0313 10:57:00.327211 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:00.340122 master-0 kubenswrapper[33013]: I0313 10:57:00.340028 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 10:57:00.349058 master-0 kubenswrapper[33013]: I0313 10:57:00.348999 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn" Mar 13 10:57:00.360155 master-0 kubenswrapper[33013]: I0313 10:57:00.360080 33013 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-27hpj" Mar 13 10:57:00.380085 master-0 kubenswrapper[33013]: I0313 10:57:00.379943 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 10:57:00.394474 master-0 kubenswrapper[33013]: E0313 10:57:00.394411 33013 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.394691 master-0 kubenswrapper[33013]: E0313 10:57:00.394509 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert podName:c09f42db-e6d7-469d-9761-88a879f6aa6b nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.394489261 +0000 UTC m=+4.870442610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert") pod "route-controller-manager-7d9bd68fd6-lwnzl" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.394691 master-0 kubenswrapper[33013]: E0313 10:57:00.394418 33013 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.394691 master-0 kubenswrapper[33013]: E0313 10:57:00.394666 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca podName:c09f42db-e6d7-469d-9761-88a879f6aa6b nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.394642736 +0000 UTC m=+4.870596265 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca") pod "route-controller-manager-7d9bd68fd6-lwnzl" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.396495 master-0 kubenswrapper[33013]: E0313 10:57:00.396459 33013 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.396568 master-0 kubenswrapper[33013]: E0313 10:57:00.396504 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config podName:a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.396494728 +0000 UTC m=+4.872448077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config") pod "controller-manager-867876d6b6-tpq67" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.396568 master-0 kubenswrapper[33013]: E0313 10:57:00.396514 33013 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.396568 master-0 kubenswrapper[33013]: E0313 10:57:00.396526 33013 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.396568 master-0 kubenswrapper[33013]: E0313 10:57:00.396543 33013 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.396769 master-0 kubenswrapper[33013]: E0313 10:57:00.396575 33013 secret.go:189] Couldn't get 
secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.396769 master-0 kubenswrapper[33013]: E0313 10:57:00.396555 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.396549149 +0000 UTC m=+4.872502498 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.396769 master-0 kubenswrapper[33013]: E0313 10:57:00.396562 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.396769 master-0 kubenswrapper[33013]: E0313 10:57:00.396666 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert podName:05a72a4c-5ce8-49d1-8e4f-334f63d4e987 nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.396647372 +0000 UTC m=+4.872600721 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert") pod "ingress-canary-dxhl9" (UID: "05a72a4c-5ce8-49d1-8e4f-334f63d4e987") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.396769 master-0 kubenswrapper[33013]: E0313 10:57:00.396719 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert podName:a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc nodeName:}" failed. 
No retries permitted until 2026-03-13 10:57:01.396700273 +0000 UTC m=+4.872653622 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert") pod "controller-manager-867876d6b6-tpq67" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.396923 master-0 kubenswrapper[33013]: E0313 10:57:00.396804 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.396727364 +0000 UTC m=+4.872680713 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.396923 master-0 kubenswrapper[33013]: E0313 10:57:00.396831 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.396825947 +0000 UTC m=+4.872779296 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.397656 master-0 kubenswrapper[33013]: E0313 10:57:00.397632 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.397711 master-0 kubenswrapper[33013]: E0313 10:57:00.397682 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.397671181 +0000 UTC m=+4.873624530 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.397711 master-0 kubenswrapper[33013]: E0313 10:57:00.397680 33013 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.397770 master-0 kubenswrapper[33013]: E0313 10:57:00.397736 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config podName:c09f42db-e6d7-469d-9761-88a879f6aa6b nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.397726672 +0000 UTC m=+4.873680021 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config") pod "route-controller-manager-7d9bd68fd6-lwnzl" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.398856 master-0 kubenswrapper[33013]: E0313 10:57:00.398814 33013 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.398931 master-0 kubenswrapper[33013]: E0313 10:57:00.398902 33013 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.398985 master-0 kubenswrapper[33013]: E0313 10:57:00.398907 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca podName:a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.398888215 +0000 UTC m=+4.874841564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca") pod "controller-manager-867876d6b6-tpq67" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.399029 master-0 kubenswrapper[33013]: E0313 10:57:00.399001 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles podName:a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.398981937 +0000 UTC m=+4.874935476 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles") pod "controller-manager-867876d6b6-tpq67" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.400107 master-0 kubenswrapper[33013]: E0313 10:57:00.400083 33013 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.400163 master-0 kubenswrapper[33013]: E0313 10:57:00.400132 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.400121379 +0000 UTC m=+4.876074728 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.400163 master-0 kubenswrapper[33013]: E0313 10:57:00.400130 33013 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.400163 master-0 kubenswrapper[33013]: E0313 10:57:00.400144 33013 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.400251 master-0 kubenswrapper[33013]: E0313 10:57:00.400157 33013 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed 
to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.400251 master-0 kubenswrapper[33013]: E0313 10:57:00.400168 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.40016254 +0000 UTC m=+4.876115889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.400251 master-0 kubenswrapper[33013]: E0313 10:57:00.400233 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle podName:939a3da3-62e7-4376-853d-dc333465446c nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.400216552 +0000 UTC m=+4.876169901 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle") pod "telemeter-client-6745c97c48-85rlf" (UID: "939a3da3-62e7-4376-853d-dc333465446c") : failed to sync configmap cache: timed out waiting for the condition Mar 13 10:57:00.400251 master-0 kubenswrapper[33013]: E0313 10:57:00.400247 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs podName:14f6e3b2-716c-4392-b3c8-75b2168ccfb7 nodeName:}" failed. No retries permitted until 2026-03-13 10:57:01.400240493 +0000 UTC m=+4.876193842 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs") pod "multus-admission-controller-7769569c45-rshw5" (UID: "14f6e3b2-716c-4392-b3c8-75b2168ccfb7") : failed to sync secret cache: timed out waiting for the condition Mar 13 10:57:00.402022 master-0 kubenswrapper[33013]: I0313 10:57:00.401994 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 10:57:00.419806 master-0 kubenswrapper[33013]: I0313 10:57:00.419749 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 10:57:00.439620 master-0 kubenswrapper[33013]: I0313 10:57:00.439527 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 10:57:00.460193 master-0 kubenswrapper[33013]: I0313 10:57:00.460142 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fn5mm" Mar 13 10:57:00.479081 master-0 kubenswrapper[33013]: I0313 10:57:00.479035 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 10:57:00.499829 master-0 kubenswrapper[33013]: I0313 10:57:00.499781 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 10:57:00.521918 master-0 kubenswrapper[33013]: I0313 10:57:00.521863 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 10:57:00.543152 master-0 kubenswrapper[33013]: I0313 10:57:00.543100 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 10:57:00.559829 master-0 kubenswrapper[33013]: I0313 
10:57:00.559783 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-rxbss" Mar 13 10:57:00.579486 master-0 kubenswrapper[33013]: I0313 10:57:00.579438 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 10:57:00.600808 master-0 kubenswrapper[33013]: I0313 10:57:00.600208 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 13 10:57:00.620629 master-0 kubenswrapper[33013]: I0313 10:57:00.620563 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 13 10:57:00.648196 master-0 kubenswrapper[33013]: I0313 10:57:00.648019 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-98k6z" Mar 13 10:57:00.662226 master-0 kubenswrapper[33013]: I0313 10:57:00.662185 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 13 10:57:00.679749 master-0 kubenswrapper[33013]: I0313 10:57:00.679693 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 13 10:57:00.701713 master-0 kubenswrapper[33013]: I0313 10:57:00.701661 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 13 10:57:00.719231 master-0 kubenswrapper[33013]: I0313 10:57:00.719177 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 10:57:00.741173 master-0 kubenswrapper[33013]: I0313 10:57:00.741106 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-l5tkf" Mar 13 10:57:00.758665 master-0 kubenswrapper[33013]: I0313 10:57:00.758532 33013 
request.go:700] Waited for 3.013815564s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemeter-client-serving-certs-ca-bundle&limit=500&resourceVersion=0 Mar 13 10:57:00.760119 master-0 kubenswrapper[33013]: I0313 10:57:00.760072 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 10:57:00.785694 master-0 kubenswrapper[33013]: I0313 10:57:00.785639 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 13 10:57:00.800192 master-0 kubenswrapper[33013]: I0313 10:57:00.800139 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 10:57:00.827844 master-0 kubenswrapper[33013]: I0313 10:57:00.827792 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 10:57:00.839457 master-0 kubenswrapper[33013]: I0313 10:57:00.839409 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 10:57:00.859795 master-0 kubenswrapper[33013]: I0313 10:57:00.859742 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 10:57:00.879221 master-0 kubenswrapper[33013]: I0313 10:57:00.879172 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 10:57:00.945267 master-0 kubenswrapper[33013]: I0313 10:57:00.945144 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p29zg\" (UniqueName: \"kubernetes.io/projected/a1a998af-4fc0-4078-a6a0-93dde6c00508-kube-api-access-p29zg\") pod 
\"kube-storage-version-migrator-operator-7f65c457f5-j7lxv\" (UID: \"a1a998af-4fc0-4078-a6a0-93dde6c00508\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-j7lxv" Mar 13 10:57:00.965020 master-0 kubenswrapper[33013]: I0313 10:57:00.964976 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjk5l\" (UniqueName: \"kubernetes.io/projected/6ed47c57-533f-43e4-88eb-07da29b4878f-kube-api-access-rjk5l\") pod \"openshift-config-operator-64488f9d78-mvfgh\" (UID: \"6ed47c57-533f-43e4-88eb-07da29b4878f\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh" Mar 13 10:57:00.985543 master-0 kubenswrapper[33013]: I0313 10:57:00.985483 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6xlb\" (UniqueName: \"kubernetes.io/projected/4d5479f3-51ec-4b93-8188-21cdda44828d-kube-api-access-j6xlb\") pod \"cluster-monitoring-operator-674cbfbd9d-vk9qz\" (UID: \"4d5479f3-51ec-4b93-8188-21cdda44828d\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-vk9qz" Mar 13 10:57:01.004019 master-0 kubenswrapper[33013]: I0313 10:57:01.003964 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg69z\" (UniqueName: \"kubernetes.io/projected/1c12a5d5-711f-4663-974c-c4b06e15fc39-kube-api-access-cg69z\") pod \"ovnkube-control-plane-66b55d57d-zwdrn\" (UID: \"1c12a5d5-711f-4663-974c-c4b06e15fc39\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-zwdrn" Mar 13 10:57:01.024634 master-0 kubenswrapper[33013]: I0313 10:57:01.023556 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56qz6\" (UniqueName: \"kubernetes.io/projected/79bb87a4-8834-4c73-834e-356ccc1f7f9b-kube-api-access-56qz6\") pod \"network-metrics-daemon-jz2lp\" (UID: \"79bb87a4-8834-4c73-834e-356ccc1f7f9b\") " pod="openshift-multus/network-metrics-daemon-jz2lp" Mar 13 
10:57:01.058440 master-0 kubenswrapper[33013]: I0313 10:57:01.041895 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grplv\" (UniqueName: \"kubernetes.io/projected/574bf255-14b3-40af-b240-2d3abd5b86b8-kube-api-access-grplv\") pod \"etcd-operator-5884b9cd56-df8wr\" (UID: \"574bf255-14b3-40af-b240-2d3abd5b86b8\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-df8wr" Mar 13 10:57:01.071461 master-0 kubenswrapper[33013]: I0313 10:57:01.071405 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbdwm\" (UniqueName: \"kubernetes.io/projected/484e6d0b-d057-4658-8e49-bbe7e6f6ee86-kube-api-access-qbdwm\") pod \"control-plane-machine-set-operator-6686554ddc-hszft\" (UID: \"484e6d0b-d057-4658-8e49-bbe7e6f6ee86\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-hszft" Mar 13 10:57:01.106570 master-0 kubenswrapper[33013]: I0313 10:57:01.106506 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2znn\" (UniqueName: \"kubernetes.io/projected/ec121f87-93ea-468c-a25f-2ec5e7d0e0ee-kube-api-access-s2znn\") pod \"machine-approver-754bdc9f9d-jcn8f\" (UID: \"ec121f87-93ea-468c-a25f-2ec5e7d0e0ee\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jcn8f" Mar 13 10:57:01.106810 master-0 kubenswrapper[33013]: I0313 10:57:01.106778 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cpdn\" (UniqueName: \"kubernetes.io/projected/c455a959-d764-4b4f-a1e0-95c27495dd9d-kube-api-access-2cpdn\") pod \"catalog-operator-7d9c49f57b-2j5jl\" (UID: \"c455a959-d764-4b4f-a1e0-95c27495dd9d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl" Mar 13 10:57:01.127668 master-0 kubenswrapper[33013]: I0313 10:57:01.126744 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpdlr\" (UniqueName: 
\"kubernetes.io/projected/282bc9ff-1bc0-421b-9cd3-d88d7c5e5303-kube-api-access-lpdlr\") pod \"openshift-controller-manager-operator-8565d84698-nsg74\" (UID: \"282bc9ff-1bc0-421b-9cd3-d88d7c5e5303\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nsg74" Mar 13 10:57:01.137991 master-0 kubenswrapper[33013]: I0313 10:57:01.137937 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btws6\" (UniqueName: \"kubernetes.io/projected/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-kube-api-access-btws6\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:57:01.152681 master-0 kubenswrapper[33013]: I0313 10:57:01.152622 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp847\" (UniqueName: \"kubernetes.io/projected/9da11462-a91d-4d02-8614-78b4c5b2f7e2-kube-api-access-hp847\") pod \"cluster-storage-operator-6fbfc8dc8f-fdt9m\" (UID: \"9da11462-a91d-4d02-8614-78b4c5b2f7e2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-fdt9m" Mar 13 10:57:01.172057 master-0 kubenswrapper[33013]: I0313 10:57:01.171995 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh6kl\" (UniqueName: \"kubernetes.io/projected/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-kube-api-access-gh6kl\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5" Mar 13 10:57:01.200571 master-0 kubenswrapper[33013]: I0313 10:57:01.200410 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/b12e76f4-b960-4534-90e6-a2cdbecd1728-kube-api-access-xq9dl\") pod \"iptables-alerter-gdjjd\" (UID: \"b12e76f4-b960-4534-90e6-a2cdbecd1728\") " 
pod="openshift-network-operator/iptables-alerter-gdjjd" Mar 13 10:57:01.218951 master-0 kubenswrapper[33013]: I0313 10:57:01.218902 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp6pp\" (UniqueName: \"kubernetes.io/projected/8a305f45-8689-45a8-8c8b-5954f2c863df-kube-api-access-zp6pp\") pod \"package-server-manager-854648ff6d-d5b45\" (UID: \"8a305f45-8689-45a8-8c8b-5954f2c863df\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:57:01.232221 master-0 kubenswrapper[33013]: I0313 10:57:01.232180 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfxm5\" (UniqueName: \"kubernetes.io/projected/86774fd7-7c26-4b41-badb-de1004397637-kube-api-access-tfxm5\") pod \"cluster-samples-operator-664cb58b85-mq7rm\" (UID: \"86774fd7-7c26-4b41-badb-de1004397637\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mq7rm" Mar 13 10:57:01.264579 master-0 kubenswrapper[33013]: I0313 10:57:01.264529 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxvqn\" (UniqueName: \"kubernetes.io/projected/b9624a9a-68dd-4cc1-a0a4-23fe297ceba3-kube-api-access-vxvqn\") pod \"ovnkube-node-hztqp\" (UID: \"b9624a9a-68dd-4cc1-a0a4-23fe297ceba3\") " pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:57:01.273557 master-0 kubenswrapper[33013]: I0313 10:57:01.273516 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzxzs\" (UniqueName: \"kubernetes.io/projected/9d8af021-f20f-48a2-8b2a-3a5a3f37237f-kube-api-access-dzxzs\") pod \"prometheus-operator-5ff8674d55-nqnlp\" (UID: \"9d8af021-f20f-48a2-8b2a-3a5a3f37237f\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-nqnlp" Mar 13 10:57:01.299062 master-0 kubenswrapper[33013]: I0313 10:57:01.298980 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfzq\" 
(UniqueName: \"kubernetes.io/projected/c87545aa-11c2-4e6e-8c13-16eeff3be83b-kube-api-access-pwfzq\") pod \"insights-operator-8f89dfddd-nhsd9\" (UID: \"c87545aa-11c2-4e6e-8c13-16eeff3be83b\") " pod="openshift-insights/insights-operator-8f89dfddd-nhsd9" Mar 13 10:57:01.311551 master-0 kubenswrapper[33013]: I0313 10:57:01.311459 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j25nl\" (UniqueName: \"kubernetes.io/projected/bfbaa57e-adac-48f8-8182-b4fdb42fbb9c-kube-api-access-j25nl\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x\" (UID: \"bfbaa57e-adac-48f8-8182-b4fdb42fbb9c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-t4g2x" Mar 13 10:57:01.338566 master-0 kubenswrapper[33013]: I0313 10:57:01.338512 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48nns\" (UniqueName: \"kubernetes.io/projected/5b796628-a6ca-4d5c-9870-0ca60b9372aa-kube-api-access-48nns\") pod \"node-exporter-mtcsw\" (UID: \"5b796628-a6ca-4d5c-9870-0ca60b9372aa\") " pod="openshift-monitoring/node-exporter-mtcsw" Mar 13 10:57:01.363195 master-0 kubenswrapper[33013]: I0313 10:57:01.363141 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5rht\" (UniqueName: \"kubernetes.io/projected/b8d40b37-0f3d-4531-9fa8-eda965d2337d-kube-api-access-l5rht\") pod \"cluster-olm-operator-77899cf6d-kh9h2\" (UID: \"b8d40b37-0f3d-4531-9fa8-eda965d2337d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kh9h2" Mar 13 10:57:01.397317 master-0 kubenswrapper[33013]: I0313 10:57:01.397249 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwqp6\" (UniqueName: \"kubernetes.io/projected/549bd192-0235-4994-b485-f1b10d16f6b5-kube-api-access-pwqp6\") pod \"service-ca-84bfdbbb7f-l8h7l\" (UID: \"549bd192-0235-4994-b485-f1b10d16f6b5\") " 
pod="openshift-service-ca/service-ca-84bfdbbb7f-l8h7l" Mar 13 10:57:01.399279 master-0 kubenswrapper[33013]: I0313 10:57:01.398334 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfpp\" (UniqueName: \"kubernetes.io/projected/1d5f5440-b10c-40ea-9f1a-5f03babc1bd9-kube-api-access-8rfpp\") pod \"network-operator-7c649bf6d4-6vpl4\" (UID: \"1d5f5440-b10c-40ea-9f1a-5f03babc1bd9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-6vpl4" Mar 13 10:57:01.430616 master-0 kubenswrapper[33013]: I0313 10:57:01.430530 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk4sg\" (UniqueName: \"kubernetes.io/projected/f87662b9-6ac6-44f3-8a16-ff858c2baa91-kube-api-access-zk4sg\") pod \"network-node-identity-9z8mk\" (UID: \"f87662b9-6ac6-44f3-8a16-ff858c2baa91\") " pod="openshift-network-node-identity/network-node-identity-9z8mk" Mar 13 10:57:01.435411 master-0 kubenswrapper[33013]: I0313 10:57:01.435352 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgb25\" (UniqueName: \"kubernetes.io/projected/11927952-723f-4d6d-922b-73139abe8877-kube-api-access-kgb25\") pod \"dns-default-zc596\" (UID: \"11927952-723f-4d6d-922b-73139abe8877\") " pod="openshift-dns/dns-default-zc596" Mar 13 10:57:01.440523 master-0 kubenswrapper[33013]: I0313 10:57:01.440456 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:57:01.440700 master-0 kubenswrapper[33013]: I0313 10:57:01.440544 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:01.440700 master-0 kubenswrapper[33013]: I0313 10:57:01.440633 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:01.440700 master-0 kubenswrapper[33013]: I0313 10:57:01.440653 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:01.440700 master-0 kubenswrapper[33013]: I0313 10:57:01.440676 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.440700 master-0 kubenswrapper[33013]: I0313 10:57:01.440700 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-rshw5" Mar 13 10:57:01.440979 
master-0 kubenswrapper[33013]: I0313 10:57:01.440912 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:01.440979 master-0 kubenswrapper[33013]: I0313 10:57:01.440943 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:57:01.440979 master-0 kubenswrapper[33013]: I0313 10:57:01.440910 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:01.441123 master-0 kubenswrapper[33013]: I0313 10:57:01.441003 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-federate-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441123 master-0 kubenswrapper[33013]: I0313 10:57:01.441033 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441123 master-0 kubenswrapper[33013]: I0313 10:57:01.441066 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441123 master-0 kubenswrapper[33013]: I0313 10:57:01.441076 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:01.441316 master-0 kubenswrapper[33013]: I0313 10:57:01.441222 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441316 master-0 kubenswrapper[33013]: I0313 10:57:01.441235 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/14f6e3b2-716c-4392-b3c8-75b2168ccfb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-rshw5\" (UID: \"14f6e3b2-716c-4392-b3c8-75b2168ccfb7\") " 
pod="openshift-multus/multus-admission-controller-7769569c45-rshw5" Mar 13 10:57:01.441316 master-0 kubenswrapper[33013]: I0313 10:57:01.441226 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:57:01.441316 master-0 kubenswrapper[33013]: I0313 10:57:01.441274 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-serving-certs-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441316 master-0 kubenswrapper[33013]: I0313 10:57:01.441293 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:57:01.441514 master-0 kubenswrapper[33013]: I0313 10:57:01.441398 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:57:01.441553 master-0 kubenswrapper[33013]: I0313 10:57:01.441525 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441595 master-0 kubenswrapper[33013]: I0313 10:57:01.441546 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:57:01.441595 master-0 kubenswrapper[33013]: I0313 10:57:01.441557 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441734 master-0 kubenswrapper[33013]: I0313 10:57:01.441717 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:01.441783 master-0 kubenswrapper[33013]: I0313 10:57:01.441747 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:57:01.441886 master-0 
kubenswrapper[33013]: I0313 10:57:01.441858 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a3da3-62e7-4376-853d-dc333465446c-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441926 master-0 kubenswrapper[33013]: I0313 10:57:01.441887 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-telemeter-client-tls\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441956 master-0 kubenswrapper[33013]: I0313 10:57:01.441935 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.441993 master-0 kubenswrapper[33013]: I0313 10:57:01.441967 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/05a72a4c-5ce8-49d1-8e4f-334f63d4e987-cert\") pod \"ingress-canary-dxhl9\" (UID: \"05a72a4c-5ce8-49d1-8e4f-334f63d4e987\") " pod="openshift-ingress-canary/ingress-canary-dxhl9" Mar 13 10:57:01.441993 master-0 kubenswrapper[33013]: I0313 10:57:01.441983 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert\") pod \"controller-manager-867876d6b6-tpq67\" (UID: 
\"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:01.442176 master-0 kubenswrapper[33013]: I0313 10:57:01.442144 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:01.442218 master-0 kubenswrapper[33013]: I0313 10:57:01.442182 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/939a3da3-62e7-4376-853d-dc333465446c-secret-telemeter-client\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf" Mar 13 10:57:01.442248 master-0 kubenswrapper[33013]: I0313 10:57:01.442227 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:01.456456 master-0 kubenswrapper[33013]: I0313 10:57:01.456329 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26dtr\" (UniqueName: \"kubernetes.io/projected/d0f42a72-24c7-49e6-8edb-97b2b0d6183a-kube-api-access-26dtr\") pod \"machine-config-operator-fdb5c78b5-s4fhs\" (UID: \"d0f42a72-24c7-49e6-8edb-97b2b0d6183a\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-s4fhs" Mar 13 10:57:01.474130 master-0 kubenswrapper[33013]: I0313 10:57:01.474049 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mnrlx\" (UniqueName: \"kubernetes.io/projected/866cf034-8fd8-4f16-8e9b-68627228aa8d-kube-api-access-mnrlx\") pod \"csi-snapshot-controller-operator-5685fbc7d-mfvmx\" (UID: \"866cf034-8fd8-4f16-8e9b-68627228aa8d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-mfvmx" Mar 13 10:57:01.495365 master-0 kubenswrapper[33013]: I0313 10:57:01.495317 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r657p\" (UniqueName: \"kubernetes.io/projected/2195f7be-b41e-4ae2-b737-d5782e0d41a8-kube-api-access-r657p\") pod \"network-check-source-7c67b67d47-jbx9v\" (UID: \"2195f7be-b41e-4ae2-b737-d5782e0d41a8\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-jbx9v" Mar 13 10:57:01.510491 master-0 kubenswrapper[33013]: I0313 10:57:01.510454 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdb2x\" (UniqueName: \"kubernetes.io/projected/2a05e72d-836f-40e0-8a5c-ee02dce494b3-kube-api-access-qdb2x\") pod \"redhat-marketplace-mrztj\" (UID: \"2a05e72d-836f-40e0-8a5c-ee02dce494b3\") " pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:57:01.533010 master-0 kubenswrapper[33013]: I0313 10:57:01.532973 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5hq9\" (UniqueName: \"kubernetes.io/projected/b68ed803-45e2-42f1-99b1-33cf59b01d74-kube-api-access-q5hq9\") pod \"metrics-server-68597ccc5b-xrb8c\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:01.555719 master-0 kubenswrapper[33013]: I0313 10:57:01.555672 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84xk\" (UniqueName: \"kubernetes.io/projected/2afe3890-e844-4dd3-ba49-3ac9178549bf-kube-api-access-d84xk\") pod \"olm-operator-d64cfc9db-rsl2h\" (UID: \"2afe3890-e844-4dd3-ba49-3ac9178549bf\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h" Mar 13 10:57:01.571804 master-0 kubenswrapper[33013]: I0313 10:57:01.571759 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k8rp\" (UniqueName: \"kubernetes.io/projected/d288e5d0-0976-477f-be14-b3d5828e0482-kube-api-access-5k8rp\") pod \"migrator-57ccdf9b5-fgvbv\" (UID: \"d288e5d0-0976-477f-be14-b3d5828e0482\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-fgvbv" Mar 13 10:57:01.591105 master-0 kubenswrapper[33013]: I0313 10:57:01.591058 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt6sd\" (UniqueName: \"kubernetes.io/projected/5aa507cf-017d-44f5-8662-77547f82fb51-kube-api-access-jt6sd\") pod \"community-operators-vr4ts\" (UID: \"5aa507cf-017d-44f5-8662-77547f82fb51\") " pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:57:01.612879 master-0 kubenswrapper[33013]: I0313 10:57:01.612833 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86ae8cb8-72b3-4be6-9feb-ee0c0da42dba-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-vqdk8\" (UID: \"86ae8cb8-72b3-4be6-9feb-ee0c0da42dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-vqdk8" Mar 13 10:57:01.636388 master-0 kubenswrapper[33013]: I0313 10:57:01.636347 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvtr\" (UniqueName: \"kubernetes.io/projected/9aa4b44d-f202-4670-afab-44b38960026f-kube-api-access-bjvtr\") pod \"multus-qng6t\" (UID: \"9aa4b44d-f202-4670-afab-44b38960026f\") " pod="openshift-multus/multus-qng6t" Mar 13 10:57:01.651192 master-0 kubenswrapper[33013]: I0313 10:57:01.651149 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22bwx\" (UniqueName: 
\"kubernetes.io/projected/5ed5e77b-948b-4d94-ac9f-440ee3c07e18-kube-api-access-22bwx\") pod \"openshift-apiserver-operator-799b6db4d7-sdg4w\" (UID: \"5ed5e77b-948b-4d94-ac9f-440ee3c07e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-sdg4w"
Mar 13 10:57:01.674470 master-0 kubenswrapper[33013]: I0313 10:57:01.674426 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqjkf\" (UniqueName: \"kubernetes.io/projected/1434c4a2-5c4d-478a-a16a-7d6a52ea3099-kube-api-access-qqjkf\") pod \"authentication-operator-7c6989d6c4-cwlxw\" (UID: \"1434c4a2-5c4d-478a-a16a-7d6a52ea3099\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-cwlxw"
Mar 13 10:57:01.691936 master-0 kubenswrapper[33013]: I0313 10:57:01.691895 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8q5s\" (UniqueName: \"kubernetes.io/projected/beee81ef-5a3a-4df2-85d5-2573679d261f-kube-api-access-f8q5s\") pod \"redhat-operators-jdzpd\" (UID: \"beee81ef-5a3a-4df2-85d5-2573679d261f\") " pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:57:01.710838 master-0 kubenswrapper[33013]: I0313 10:57:01.710703 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kvd\" (UniqueName: \"kubernetes.io/projected/5448b59a-b731-45a3-9ded-d25315f597fb-kube-api-access-d8kvd\") pod \"openshift-state-metrics-74cc79fd76-jxrlm\" (UID: \"5448b59a-b731-45a3-9ded-d25315f597fb\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jxrlm"
Mar 13 10:57:01.731451 master-0 kubenswrapper[33013]: I0313 10:57:01.731405 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5656\" (UniqueName: \"kubernetes.io/projected/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-kube-api-access-f5656\") pod \"controller-manager-867876d6b6-tpq67\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67"
Mar 13 10:57:01.752436 master-0 kubenswrapper[33013]: I0313 10:57:01.752405 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd2mn\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-kube-api-access-qd2mn\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:57:01.776346 master-0 kubenswrapper[33013]: I0313 10:57:01.776305 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt7hs\" (UniqueName: \"kubernetes.io/projected/d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33-kube-api-access-bt7hs\") pod \"cluster-autoscaler-operator-69576476f7-pzjxd\" (UID: \"d9c4a7b4-28f2-4dcb-bdba-e23a67b79c33\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-pzjxd"
Mar 13 10:57:01.778103 master-0 kubenswrapper[33013]: I0313 10:57:01.778059 33013 request.go:700] Waited for 3.940286227s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token
Mar 13 10:57:01.794264 master-0 kubenswrapper[33013]: I0313 10:57:01.794202 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c4rc\" (UniqueName: \"kubernetes.io/projected/3ff2ab1c-7057-4e18-8e32-68807f86532a-kube-api-access-8c4rc\") pod \"dns-operator-589895fbb7-wjrpm\" (UID: \"3ff2ab1c-7057-4e18-8e32-68807f86532a\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wjrpm"
Mar 13 10:57:01.813168 master-0 kubenswrapper[33013]: I0313 10:57:01.813113 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f9db15a-8854-485b-9863-9cbe5dddd977-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpslh\" (UID: \"8f9db15a-8854-485b-9863-9cbe5dddd977\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpslh"
Mar 13 10:57:01.831534 master-0 kubenswrapper[33013]: I0313 10:57:01.831478 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdg6f\" (UniqueName: \"kubernetes.io/projected/2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8-kube-api-access-qdg6f\") pod \"kube-state-metrics-68b88f8cb5-2n8dn\" (UID: \"2d07f6b0-1f25-4d9f-af02-5449f2d6e7b8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-2n8dn"
Mar 13 10:57:01.855025 master-0 kubenswrapper[33013]: I0313 10:57:01.854955 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvhh\" (UniqueName: \"kubernetes.io/projected/4a1b43c4-55b9-4c72-ba7c-9089bf28cf16-kube-api-access-rvvhh\") pod \"certified-operators-bgvrc\" (UID: \"4a1b43c4-55b9-4c72-ba7c-9089bf28cf16\") " pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:57:01.870536 master-0 kubenswrapper[33013]: I0313 10:57:01.870492 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqg6g\" (UniqueName: \"kubernetes.io/projected/6622be09-206e-4d02-90ca-6d9f2fc852aa-kube-api-access-lqg6g\") pod \"csi-snapshot-controller-7577d6f48-cbhxt\" (UID: \"6622be09-206e-4d02-90ca-6d9f2fc852aa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-cbhxt"
Mar 13 10:57:01.892160 master-0 kubenswrapper[33013]: I0313 10:57:01.892110 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxkl8\" (UniqueName: \"kubernetes.io/projected/1edde4bf-4554-4ab2-b588-513ad84a9bae-kube-api-access-kxkl8\") pod \"packageserver-7b564dfc5b-qc9cq\" (UID: \"1edde4bf-4554-4ab2-b588-513ad84a9bae\") " pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"
Mar 13 10:57:01.911092 master-0 kubenswrapper[33013]: I0313 10:57:01.911011 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8w5\" (UniqueName: \"kubernetes.io/projected/4e6ecc16-19cb-4b66-801f-b958b10d0ce7-kube-api-access-gn8w5\") pod \"cloud-credential-operator-55d85b7b47-t8ll8\" (UID: \"4e6ecc16-19cb-4b66-801f-b958b10d0ce7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-t8ll8"
Mar 13 10:57:01.932440 master-0 kubenswrapper[33013]: I0313 10:57:01.932363 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcb99\" (UniqueName: \"kubernetes.io/projected/c09f42db-e6d7-469d-9761-88a879f6aa6b-kube-api-access-mcb99\") pod \"route-controller-manager-7d9bd68fd6-lwnzl\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"
Mar 13 10:57:01.961357 master-0 kubenswrapper[33013]: I0313 10:57:01.960874 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec3168fc-6c8f-4603-94e0-17b1ae22a802-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-px9bl\" (UID: \"ec3168fc-6c8f-4603-94e0-17b1ae22a802\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-px9bl"
Mar 13 10:57:01.971815 master-0 kubenswrapper[33013]: I0313 10:57:01.971726 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knkb7\" (UniqueName: \"kubernetes.io/projected/66f49a19-0e3b-4611-b8a6-5f5687fa20b6-kube-api-access-knkb7\") pod \"marketplace-operator-64bf9778cb-85x6d\" (UID: \"66f49a19-0e3b-4611-b8a6-5f5687fa20b6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:57:01.993893 master-0 kubenswrapper[33013]: I0313 10:57:01.993822 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzv5v\" (UniqueName: \"kubernetes.io/projected/257a4a8b-014c-4473-80a0-e95cf6d41bf1-kube-api-access-hzv5v\") pod \"catalogd-controller-manager-7f8b8b6f4c-f46qd\" (UID: \"257a4a8b-014c-4473-80a0-e95cf6d41bf1\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:57:02.013011 master-0 kubenswrapper[33013]: I0313 10:57:02.012966 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac1a605-d2d5-4004-96f5-121c20555bde-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xmpst\" (UID: \"0ac1a605-d2d5-4004-96f5-121c20555bde\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xmpst"
Mar 13 10:57:02.038304 master-0 kubenswrapper[33013]: I0313 10:57:02.038229 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp2qn\" (UniqueName: \"kubernetes.io/projected/37b2e803-302b-4650-b18f-d3d2dd703bd5-kube-api-access-hp2qn\") pod \"service-ca-operator-69b6fc6b88-lntzv\" (UID: \"37b2e803-302b-4650-b18f-d3d2dd703bd5\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-lntzv"
Mar 13 10:57:02.086692 master-0 kubenswrapper[33013]: I0313 10:57:02.086647 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:57:02.097361 master-0 kubenswrapper[33013]: I0313 10:57:02.097308 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9cbm\" (UniqueName: \"kubernetes.io/projected/1d72d950-cfb4-4ed5-9ad6-f7266b937493-kube-api-access-h9cbm\") pod \"apiserver-65bc99cdf7-7rjbr\" (UID: \"1d72d950-cfb4-4ed5-9ad6-f7266b937493\") " pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:57:02.097691 master-0 kubenswrapper[33013]: I0313 10:57:02.097656 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkdfn\" (UniqueName: \"kubernetes.io/projected/eb778c86-ea51-4eab-82b8-a8e0bec0f050-kube-api-access-hkdfn\") pod \"router-default-79f8cd6fdd-b4x54\" (UID: \"eb778c86-ea51-4eab-82b8-a8e0bec0f050\") " pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:57:02.114003 master-0 kubenswrapper[33013]: I0313 10:57:02.113950 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vcv\" (UniqueName: \"kubernetes.io/projected/8cf9326b-bc23-45c2-82c4-9c08c739ac5a-kube-api-access-m5vcv\") pod \"cluster-image-registry-operator-86d6d77c7c-492v4\" (UID: \"8cf9326b-bc23-45c2-82c4-9c08c739ac5a\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-492v4"
Mar 13 10:57:02.140188 master-0 kubenswrapper[33013]: I0313 10:57:02.140131 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcjm\" (UniqueName: \"kubernetes.io/projected/42b4d53c-af72-44c8-9605-271445f95f87-kube-api-access-kjcjm\") pod \"cluster-node-tuning-operator-66c7586884-9fptc\" (UID: \"42b4d53c-af72-44c8-9605-271445f95f87\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-9fptc"
Mar 13 10:57:02.154533 master-0 kubenswrapper[33013]: I0313 10:57:02.154471 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htqw9\" (UniqueName: \"kubernetes.io/projected/d9075a44-22d3-4562-819e-d5a92f013663-kube-api-access-htqw9\") pod \"tuned-7wkqw\" (UID: \"d9075a44-22d3-4562-819e-d5a92f013663\") " pod="openshift-cluster-node-tuning-operator/tuned-7wkqw"
Mar 13 10:57:02.171218 master-0 kubenswrapper[33013]: I0313 10:57:02.171171 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6nnz\" (UniqueName: \"kubernetes.io/projected/5843b0d4-a538-4261-b425-598e318c9d07-kube-api-access-r6nnz\") pod \"multus-additional-cni-plugins-mc5nc\" (UID: \"5843b0d4-a538-4261-b425-598e318c9d07\") " pod="openshift-multus/multus-additional-cni-plugins-mc5nc"
Mar 13 10:57:02.190901 master-0 kubenswrapper[33013]: I0313 10:57:02.190847 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwc4l\" (UniqueName: \"kubernetes.io/projected/e485e709-32ba-442b-98e5-b4073516c0ab-kube-api-access-qwc4l\") pod \"node-resolver-tfwn8\" (UID: \"e485e709-32ba-442b-98e5-b4073516c0ab\") " pod="openshift-dns/node-resolver-tfwn8"
Mar 13 10:57:02.213199 master-0 kubenswrapper[33013]: I0313 10:57:02.213064 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsswm\" (UniqueName: \"kubernetes.io/projected/b10584c2-ef04-4649-bcb6-9222c9530c3f-kube-api-access-zsswm\") pod \"operator-controller-controller-manager-6598bfb6c4-bg6zf\" (UID: \"b10584c2-ef04-4649-bcb6-9222c9530c3f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:57:02.234671 master-0 kubenswrapper[33013]: I0313 10:57:02.234622 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdgld\" (UniqueName: \"kubernetes.io/projected/4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b-kube-api-access-tdgld\") pod \"apiserver-778fb45b4-65f7b\" (UID: \"4217cfa5-f53a-4e23-a3c8-ac77e26dcc7b\") " pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:57:02.252430 master-0 kubenswrapper[33013]: I0313 10:57:02.252386 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9zz\" (UniqueName: \"kubernetes.io/projected/60e17cd1-c520-4d8d-8c72-47bf73b8cc66-kube-api-access-xg9zz\") pod \"machine-config-daemon-gdfnq\" (UID: \"60e17cd1-c520-4d8d-8c72-47bf73b8cc66\") " pod="openshift-machine-config-operator/machine-config-daemon-gdfnq"
Mar 13 10:57:02.275812 master-0 kubenswrapper[33013]: I0313 10:57:02.275766 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2q2f\" (UniqueName: \"kubernetes.io/projected/939a3da3-62e7-4376-853d-dc333465446c-kube-api-access-t2q2f\") pod \"telemeter-client-6745c97c48-85rlf\" (UID: \"939a3da3-62e7-4376-853d-dc333465446c\") " pod="openshift-monitoring/telemeter-client-6745c97c48-85rlf"
Mar 13 10:57:02.291659 master-0 kubenswrapper[33013]: I0313 10:57:02.291604 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k4c5\" (UniqueName: \"kubernetes.io/projected/4df756f0-c6b6-4730-842a-7ee9227397ae-kube-api-access-8k4c5\") pod \"machine-config-server-mhk8z\" (UID: \"4df756f0-c6b6-4730-842a-7ee9227397ae\") " pod="openshift-machine-config-operator/machine-config-server-mhk8z"
Mar 13 10:57:02.313755 master-0 kubenswrapper[33013]: I0313 10:57:02.313719 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlmhn\" (UniqueName: \"kubernetes.io/projected/070b85a0-f076-4750-aa00-dabba401dc75-kube-api-access-nlmhn\") pod \"cluster-baremetal-operator-5cdb4c5598-gsr52\" (UID: \"070b85a0-f076-4750-aa00-dabba401dc75\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-gsr52"
Mar 13 10:57:02.331531 master-0 kubenswrapper[33013]: I0313 10:57:02.331490 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7v6s\" (UniqueName: \"kubernetes.io/projected/26cc0e72-8b4f-4087-89b9-05d2cf6df3f6-kube-api-access-m7v6s\") pod \"machine-config-controller-ff46b7bdf-jtj5g\" (UID: \"26cc0e72-8b4f-4087-89b9-05d2cf6df3f6\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-jtj5g"
Mar 13 10:57:02.353121 master-0 kubenswrapper[33013]: I0313 10:57:02.353074 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb5l4\" (UniqueName: \"kubernetes.io/projected/48f99840-4d9e-49c5-819e-0bb15493feb5-kube-api-access-mb5l4\") pod \"machine-api-operator-84bf6db4f9-7h8nz\" (UID: \"48f99840-4d9e-49c5-819e-0bb15493feb5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-7h8nz"
Mar 13 10:57:02.371014 master-0 kubenswrapper[33013]: I0313 10:57:02.370963 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7667717b-fb74-456b-8615-16475cb69e98-bound-sa-token\") pod \"ingress-operator-677db989d6-tzd9b\" (UID: \"7667717b-fb74-456b-8615-16475cb69e98\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b"
Mar 13 10:57:02.391163 master-0 kubenswrapper[33013]: I0313 10:57:02.391124 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchrx\" (UniqueName: \"kubernetes.io/projected/803de28e-3b31-4ea2-9b97-87a733635a5c-kube-api-access-gchrx\") pod \"network-check-target-96vwf\" (UID: \"803de28e-3b31-4ea2-9b97-87a733635a5c\") " pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:57:02.410770 master-0 kubenswrapper[33013]: E0313 10:57:02.410699 33013 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:02.410770 master-0 kubenswrapper[33013]: E0313 10:57:02.410755 33013 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:02.411038 master-0 kubenswrapper[33013]: E0313 10:57:02.410839 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access podName:533638d2-44ce-4cf8-aa47-a6b89c94621d nodeName:}" failed. No retries permitted until 2026-03-13 10:57:02.910813474 +0000 UTC m=+6.386766823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access") pod "installer-3-master-0" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:02.426454 master-0 kubenswrapper[33013]: E0313 10:57:02.426404 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:57:02.430389 master-0 kubenswrapper[33013]: E0313 10:57:02.430362 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 10:57:02.446669 master-0 kubenswrapper[33013]: E0313 10:57:02.446616 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:57:02.477102 master-0 kubenswrapper[33013]: E0313 10:57:02.476763 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Mar 13 10:57:02.486801 master-0 kubenswrapper[33013]: E0313 10:57:02.486726 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 10:57:02.496123 master-0 kubenswrapper[33013]: I0313 10:57:02.496072 33013 scope.go:117] "RemoveContainer" containerID="8f8b390ae6e4a037523aeb2d8c83e0584313e3d0ff96486ce09d9902d1586cb0"
Mar 13 10:57:02.507503 master-0 kubenswrapper[33013]: E0313 10:57:02.507453 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:57:02.551770 master-0 kubenswrapper[33013]: E0313 10:57:02.550951 33013 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.839s"
Mar 13 10:57:02.563065 master-0 kubenswrapper[33013]: I0313 10:57:02.563024 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 13 10:57:02.600604 master-0 kubenswrapper[33013]: I0313 10:57:02.600549 33013 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 13 10:57:02.600787 master-0 kubenswrapper[33013]: I0313 10:57:02.600669 33013 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 13 10:57:02.621444 master-0 kubenswrapper[33013]: I0313 10:57:02.621400 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 13 10:57:02.621444 master-0 kubenswrapper[33013]: I0313 10:57:02.621441 33013 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="1018719a-c6e6-4625-9309-9302ae0dfe9b"
Mar 13 10:57:02.621644 master-0 kubenswrapper[33013]: I0313 10:57:02.621483 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:57:02.621644 master-0 kubenswrapper[33013]: I0313 10:57:02.621504 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 13 10:57:02.621644 master-0 kubenswrapper[33013]: I0313 10:57:02.621513 33013 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="1018719a-c6e6-4625-9309-9302ae0dfe9b"
Mar 13 10:57:02.621644 master-0 kubenswrapper[33013]: I0313 10:57:02.621526 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:57:02.621644 master-0 kubenswrapper[33013]: I0313 10:57:02.621625 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:57:02.621895 master-0 kubenswrapper[33013]: I0313 10:57:02.621658 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt"
Mar 13 10:57:02.621895 master-0 kubenswrapper[33013]: I0313 10:57:02.621724 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:57:02.621895 master-0 kubenswrapper[33013]: I0313 10:57:02.621768 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:57:02.621895 master-0 kubenswrapper[33013]: I0313 10:57:02.621823 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:57:02.622102 master-0 kubenswrapper[33013]: I0313 10:57:02.621897 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-vkqtt"
Mar 13 10:57:02.622102 master-0 kubenswrapper[33013]: I0313 10:57:02.622036 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:57:02.622102 master-0 kubenswrapper[33013]: I0313 10:57:02.622059 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:57:02.622102 master-0 kubenswrapper[33013]: I0313 10:57:02.622091 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:57:02.622224 master-0 kubenswrapper[33013]: I0313 10:57:02.622118 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:57:02.622224 master-0 kubenswrapper[33013]: I0313 10:57:02.622145 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-mvfgh"
Mar 13 10:57:02.622224 master-0 kubenswrapper[33013]: I0313 10:57:02.622156 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 13 10:57:02.622224 master-0 kubenswrapper[33013]: I0313 10:57:02.622182 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zc596"
Mar 13 10:57:02.622224 master-0 kubenswrapper[33013]: I0313 10:57:02.622203 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zc596"
Mar 13 10:57:02.622224 master-0 kubenswrapper[33013]: I0313 10:57:02.622213 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:57:02.622462 master-0 kubenswrapper[33013]: I0313 10:57:02.622317 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:57:02.622462 master-0 kubenswrapper[33013]: I0313 10:57:02.622431 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jdzpd"
Mar 13 10:57:02.622523 master-0 kubenswrapper[33013]: I0313 10:57:02.622463 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:57:02.622979 master-0 kubenswrapper[33013]: I0313 10:57:02.622899 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:57:02.622979 master-0 kubenswrapper[33013]: I0313 10:57:02.622950 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:57:02.622979 master-0 kubenswrapper[33013]: I0313 10:57:02.622970 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b"
Mar 13 10:57:02.796540 master-0 kubenswrapper[33013]: I0313 10:57:02.796492 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:57:02.840439 master-0 kubenswrapper[33013]: I0313 10:57:02.840388 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:57:02.965696 master-0 kubenswrapper[33013]: I0313 10:57:02.965644 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:57:02.965967 master-0 kubenswrapper[33013]: E0313 10:57:02.965810 33013 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:02.965967 master-0 kubenswrapper[33013]: E0313 10:57:02.965851 33013 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:02.965967 master-0 kubenswrapper[33013]: E0313 10:57:02.965919 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access podName:533638d2-44ce-4cf8-aa47-a6b89c94621d nodeName:}" failed. No retries permitted until 2026-03-13 10:57:03.965899444 +0000 UTC m=+7.441852793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access") pod "installer-3-master-0" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:03.094536 master-0 kubenswrapper[33013]: I0313 10:57:03.094432 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-tzd9b_7667717b-fb74-456b-8615-16475cb69e98/ingress-operator/5.log"
Mar 13 10:57:03.095266 master-0 kubenswrapper[33013]: I0313 10:57:03.095237 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-tzd9b" event={"ID":"7667717b-fb74-456b-8615-16475cb69e98","Type":"ContainerStarted","Data":"88dbc82c4ddcda8e52b5b7393ac60efced5e519d53c86bc6a86048340bdca4dd"}
Mar 13 10:57:03.095728 master-0 kubenswrapper[33013]: I0313 10:57:03.095705 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:57:03.111640 master-0 kubenswrapper[33013]: I0313 10:57:03.110738 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:57:03.222748 master-0 kubenswrapper[33013]: I0313 10:57:03.222694 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:57:03.269793 master-0 kubenswrapper[33013]: I0313 10:57:03.269739 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:57:03.304761 master-0 kubenswrapper[33013]: I0313 10:57:03.304685 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp"
Mar 13 10:57:03.655426 master-0 kubenswrapper[33013]: I0313 10:57:03.655359 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:57:03.659868 master-0 kubenswrapper[33013]: I0313 10:57:03.659820 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-b4x54"
Mar 13 10:57:03.850718 master-0 kubenswrapper[33013]: I0313 10:57:03.850621 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=6.850603126 podStartE2EDuration="6.850603126s" podCreationTimestamp="2026-03-13 10:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:57:03.814825613 +0000 UTC m=+7.290778962" watchObservedRunningTime="2026-03-13 10:57:03.850603126 +0000 UTC m=+7.326556475"
Mar 13 10:57:03.892378 master-0 kubenswrapper[33013]: I0313 10:57:03.892283 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:57:03.894420 master-0 kubenswrapper[33013]: I0313 10:57:03.894362 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-96vwf"
Mar 13 10:57:03.950953 master-0 kubenswrapper[33013]: I0313 10:57:03.950767 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=6.950747261 podStartE2EDuration="6.950747261s" podCreationTimestamp="2026-03-13 10:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:57:03.851317336 +0000 UTC m=+7.327270685" watchObservedRunningTime="2026-03-13 10:57:03.950747261 +0000 UTC m=+7.426700610"
Mar 13 10:57:04.001107 master-0 kubenswrapper[33013]: I0313 10:57:04.001045 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:57:04.001340 master-0 kubenswrapper[33013]: E0313 10:57:04.001271 33013 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:04.001340 master-0 kubenswrapper[33013]: E0313 10:57:04.001312 33013 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:04.001430 master-0 kubenswrapper[33013]: E0313 10:57:04.001387 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access podName:533638d2-44ce-4cf8-aa47-a6b89c94621d nodeName:}" failed. No retries permitted until 2026-03-13 10:57:06.001362509 +0000 UTC m=+9.477315878 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access") pod "installer-3-master-0" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:04.106687 master-0 kubenswrapper[33013]: I0313 10:57:04.106621 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:57:04.106687 master-0 kubenswrapper[33013]: I0313 10:57:04.106666 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:57:04.110961 master-0 kubenswrapper[33013]: I0313 10:57:04.110878 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:57:04.497061 master-0 kubenswrapper[33013]: I0313 10:57:04.496971 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:57:04.497498 master-0 kubenswrapper[33013]: I0313 10:57:04.497464 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 10:57:04.500533 master-0 kubenswrapper[33013]: I0313 10:57:04.500492 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 10:57:04.565310 master-0 kubenswrapper[33013]: I0313 10:57:04.565234 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"
Mar 13 10:57:04.568175 master-0 kubenswrapper[33013]: I0313 10:57:04.567842 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7b564dfc5b-qc9cq"
Mar 13 10:57:05.663156 master-0 kubenswrapper[33013]: I0313 10:57:05.663068 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:57:05.667852 master-0 kubenswrapper[33013]: I0313 10:57:05.667823 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:57:05.702891 master-0 kubenswrapper[33013]: I0313 10:57:05.702852 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:57:05.706454 master-0 kubenswrapper[33013]: I0313 10:57:05.706424 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-f46qd"
Mar 13 10:57:05.748529 master-0 kubenswrapper[33013]: I0313 10:57:05.748479 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:57:05.758313 master-0 kubenswrapper[33013]: I0313 10:57:05.758279 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-rsl2h"
Mar 13 10:57:06.035355 master-0 kubenswrapper[33013]: I0313 10:57:06.035199 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 10:57:06.035355 master-0 kubenswrapper[33013]: E0313 10:57:06.035343 33013 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:06.035633 master-0 kubenswrapper[33013]: E0313 10:57:06.035368 33013 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:06.035633 master-0 kubenswrapper[33013]: E0313 10:57:06.035423 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access podName:533638d2-44ce-4cf8-aa47-a6b89c94621d nodeName:}" failed. No retries permitted until 2026-03-13 10:57:10.035406928 +0000 UTC m=+13.511360277 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access") pod "installer-3-master-0" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 10:57:06.123745 master-0 kubenswrapper[33013]: I0313 10:57:06.123692 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:57:06.576599 master-0 kubenswrapper[33013]: I0313 10:57:06.576508 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 13 10:57:06.588497 master-0 kubenswrapper[33013]: I0313 10:57:06.588454 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 13 10:57:06.680620 master-0 kubenswrapper[33013]: I0313 10:57:06.680532 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mrztj"
Mar 13 10:57:06.695888 master-0 kubenswrapper[33013]: I0313 10:57:06.695833 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:57:06.774671 master-0 kubenswrapper[33013]: I0313 10:57:06.774589 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bgvrc"
Mar 13 10:57:06.940435 master-0 kubenswrapper[33013]: I0313 10:57:06.940295 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:57:06.944154 master-0 kubenswrapper[33013]: I0313 10:57:06.944081 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-85x6d"
Mar 13 10:57:07.136144 master-0 kubenswrapper[33013]: I0313 10:57:07.136053 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 13 10:57:07.221639 master-0 kubenswrapper[33013]: I0313 10:57:07.221495 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-65bc99cdf7-7rjbr"
Mar 13 10:57:07.282173 master-0 kubenswrapper[33013]: I0313 10:57:07.282109 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:57:07.294592 master-0 kubenswrapper[33013]: I0313 10:57:07.293532 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:57:07.295084 master-0 kubenswrapper[33013]: I0313 10:57:07.295044 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-2j5jl"
Mar 13 10:57:07.297496 master-0 kubenswrapper[33013]: I0313 10:57:07.295379 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-bg6zf"
Mar 13 10:57:07.515869 master-0 kubenswrapper[33013]: I0313 10:57:07.515760 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started"
pod="openshift-oauth-apiserver/apiserver-778fb45b4-65f7b" Mar 13 10:57:07.622658 master-0 kubenswrapper[33013]: I0313 10:57:07.620562 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:07.643663 master-0 kubenswrapper[33013]: I0313 10:57:07.637916 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:57:09.046593 master-0 kubenswrapper[33013]: I0313 10:57:09.046531 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:57:09.099699 master-0 kubenswrapper[33013]: I0313 10:57:09.099645 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jdzpd" Mar 13 10:57:09.300920 master-0 kubenswrapper[33013]: I0313 10:57:09.300783 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:57:09.375397 master-0 kubenswrapper[33013]: I0313 10:57:09.375335 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:57:09.378945 master-0 kubenswrapper[33013]: I0313 10:57:09.378908 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:57:10.095125 master-0 kubenswrapper[33013]: I0313 10:57:10.095040 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:57:10.095713 
master-0 kubenswrapper[33013]: E0313 10:57:10.095258 33013 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 10:57:10.095713 master-0 kubenswrapper[33013]: E0313 10:57:10.095296 33013 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 10:57:10.095713 master-0 kubenswrapper[33013]: E0313 10:57:10.095361 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access podName:533638d2-44ce-4cf8-aa47-a6b89c94621d nodeName:}" failed. No retries permitted until 2026-03-13 10:57:18.095341826 +0000 UTC m=+21.571295175 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access") pod "installer-3-master-0" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 10:57:10.099544 master-0 kubenswrapper[33013]: I0313 10:57:10.099509 33013 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 10:57:10.099791 master-0 kubenswrapper[33013]: I0313 10:57:10.099759 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" containerID="cri-o://a6907ade1777d6a7c993aeb23acaeb6fdd891b625a9b035210953700ede72f63" gracePeriod=5 Mar 13 10:57:10.255079 master-0 kubenswrapper[33013]: I0313 10:57:10.255015 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 
10:57:10.255355 master-0 kubenswrapper[33013]: I0313 10:57:10.255215 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:57:10.255355 master-0 kubenswrapper[33013]: I0313 10:57:10.255231 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:57:10.293081 master-0 kubenswrapper[33013]: I0313 10:57:10.293034 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:57:10.626659 master-0 kubenswrapper[33013]: I0313 10:57:10.626605 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:57:10.772759 master-0 kubenswrapper[33013]: I0313 10:57:10.772703 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:57:10.772759 master-0 kubenswrapper[33013]: I0313 10:57:10.772770 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:57:10.773064 master-0 kubenswrapper[33013]: I0313 10:57:10.772792 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-d5b45" Mar 13 10:57:10.954427 master-0 kubenswrapper[33013]: I0313 10:57:10.954305 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:57:11.002732 master-0 kubenswrapper[33013]: I0313 10:57:11.002649 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vr4ts" Mar 13 10:57:11.147549 master-0 kubenswrapper[33013]: I0313 10:57:11.147484 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:57:14.738069 master-0 kubenswrapper[33013]: 
I0313 10:57:14.738023 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:15.196647 master-0 kubenswrapper[33013]: I0313 10:57:15.195880 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 13 10:57:15.196647 master-0 kubenswrapper[33013]: I0313 10:57:15.195944 33013 generic.go:334] "Generic (PLEG): container finished" podID="899242a15b2bdf3b4a04fb323647ca94" containerID="a6907ade1777d6a7c993aeb23acaeb6fdd891b625a9b035210953700ede72f63" exitCode=137 Mar 13 10:57:15.667725 master-0 kubenswrapper[33013]: I0313 10:57:15.667682 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 13 10:57:15.667922 master-0 kubenswrapper[33013]: I0313 10:57:15.667767 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:57:15.774941 master-0 kubenswrapper[33013]: I0313 10:57:15.774887 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 10:57:15.774941 master-0 kubenswrapper[33013]: I0313 10:57:15.774952 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 10:57:15.775550 master-0 kubenswrapper[33013]: I0313 10:57:15.774971 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 10:57:15.775550 master-0 kubenswrapper[33013]: I0313 10:57:15.775007 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 10:57:15.775550 master-0 kubenswrapper[33013]: I0313 10:57:15.775031 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 10:57:15.776008 master-0 kubenswrapper[33013]: I0313 10:57:15.775978 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock" (OuterVolumeSpecName: "var-lock") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:57:15.776008 master-0 kubenswrapper[33013]: I0313 10:57:15.775994 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log" (OuterVolumeSpecName: "var-log") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:57:15.776099 master-0 kubenswrapper[33013]: I0313 10:57:15.776021 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests" (OuterVolumeSpecName: "manifests") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:57:15.776099 master-0 kubenswrapper[33013]: I0313 10:57:15.776030 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:57:15.780692 master-0 kubenswrapper[33013]: I0313 10:57:15.780659 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:57:15.876721 master-0 kubenswrapper[33013]: I0313 10:57:15.876556 33013 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") on node \"master-0\" DevicePath \"\"" Mar 13 10:57:15.876721 master-0 kubenswrapper[33013]: I0313 10:57:15.876625 33013 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:57:15.876721 master-0 kubenswrapper[33013]: I0313 10:57:15.876638 33013 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") on node \"master-0\" DevicePath \"\"" Mar 13 10:57:15.876721 master-0 kubenswrapper[33013]: I0313 10:57:15.876649 33013 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:57:15.876721 master-0 kubenswrapper[33013]: I0313 10:57:15.876663 33013 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:57:16.203181 master-0 kubenswrapper[33013]: I0313 10:57:16.203035 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 13 10:57:16.203181 master-0 kubenswrapper[33013]: I0313 10:57:16.203137 33013 scope.go:117] "RemoveContainer" containerID="a6907ade1777d6a7c993aeb23acaeb6fdd891b625a9b035210953700ede72f63" Mar 13 10:57:16.203636 master-0 kubenswrapper[33013]: I0313 10:57:16.203249 33013 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:57:16.242338 master-0 kubenswrapper[33013]: I0313 10:57:16.242287 33013 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="a700fa84-887b-4579-8f09-e021694004fd" Mar 13 10:57:16.719739 master-0 kubenswrapper[33013]: I0313 10:57:16.719689 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899242a15b2bdf3b4a04fb323647ca94" path="/var/lib/kubelet/pods/899242a15b2bdf3b4a04fb323647ca94/volumes" Mar 13 10:57:16.719980 master-0 kubenswrapper[33013]: I0313 10:57:16.719962 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 13 10:57:16.733787 master-0 kubenswrapper[33013]: I0313 10:57:16.733714 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 10:57:16.733787 master-0 kubenswrapper[33013]: I0313 10:57:16.733765 33013 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="a700fa84-887b-4579-8f09-e021694004fd" Mar 13 10:57:16.734025 master-0 kubenswrapper[33013]: I0313 10:57:16.733881 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:57:16.736180 master-0 kubenswrapper[33013]: I0313 10:57:16.736132 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 10:57:16.736180 master-0 kubenswrapper[33013]: I0313 10:57:16.736173 33013 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
mirrorPodUID="a700fa84-887b-4579-8f09-e021694004fd" Mar 13 10:57:16.792176 master-0 kubenswrapper[33013]: I0313 10:57:16.792119 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mrztj" Mar 13 10:57:18.109254 master-0 kubenswrapper[33013]: I0313 10:57:18.109172 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:57:18.109902 master-0 kubenswrapper[33013]: E0313 10:57:18.109353 33013 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 10:57:18.109902 master-0 kubenswrapper[33013]: E0313 10:57:18.109403 33013 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 10:57:18.109902 master-0 kubenswrapper[33013]: E0313 10:57:18.109468 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access podName:533638d2-44ce-4cf8-aa47-a6b89c94621d nodeName:}" failed. No retries permitted until 2026-03-13 10:57:34.1094497 +0000 UTC m=+37.585403049 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access") pod "installer-3-master-0" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 10:57:19.957029 master-0 kubenswrapper[33013]: I0313 10:57:19.956962 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:57:19.957770 master-0 kubenswrapper[33013]: I0313 10:57:19.957121 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 10:57:19.980370 master-0 kubenswrapper[33013]: I0313 10:57:19.980317 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hztqp" Mar 13 10:57:20.175996 master-0 kubenswrapper[33013]: I0313 10:57:20.175913 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:22.398595 master-0 kubenswrapper[33013]: I0313 10:57:22.398517 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-wbd5j"] Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.398863 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb7b798-15b5-4004-87d0-96ce9381cdbe" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.398882 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb7b798-15b5-4004-87d0-96ce9381cdbe" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.398926 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1769d48d-7ef0-48ee-9b7d-b46151ae5df6" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.398934 33013 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1769d48d-7ef0-48ee-9b7d-b46151ae5df6" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.398954 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55a2a95-178c-4fcd-9866-3a149948d1d3" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.398963 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55a2a95-178c-4fcd-9866-3a149948d1d3" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.398984 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d0a863-e526-43af-81e7-427336d845b0" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.398992 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d0a863-e526-43af-81e7-427336d845b0" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.399009 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bdd05f-f920-4441-969f-336c85d2da57" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.399017 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bdd05f-f920-4441-969f-336c85d2da57" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.399035 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.399045 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.399057 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f7830b-51cc-45d2-bbb3-ac01eeed57ac" containerName="installer" Mar 13 10:57:22.399188 master-0 
kubenswrapper[33013]: I0313 10:57:22.399066 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f7830b-51cc-45d2-bbb3-ac01eeed57ac" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.399077 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e8e251-40d9-458a-92a7-9b2e91dc7359" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.399085 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e8e251-40d9-458a-92a7-9b2e91dc7359" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.399106 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e06733a-9c47-4bcf-a5e2-946db8e2714b" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.399114 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e06733a-9c47-4bcf-a5e2-946db8e2714b" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.399137 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.399145 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.399165 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerName="assisted-installer-controller" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.399173 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerName="assisted-installer-controller" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: E0313 10:57:22.399191 33013 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="7baf3efc-04dc-4c17-9c2a-397ac022d281" containerName="installer" Mar 13 10:57:22.399188 master-0 kubenswrapper[33013]: I0313 10:57:22.399199 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="7baf3efc-04dc-4c17-9c2a-397ac022d281" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: E0313 10:57:22.399229 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399238 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: E0313 10:57:22.399258 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533638d2-44ce-4cf8-aa47-a6b89c94621d" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399266 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="533638d2-44ce-4cf8-aa47-a6b89c94621d" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399413 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb7b798-15b5-4004-87d0-96ce9381cdbe" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399471 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f7830b-51cc-45d2-bbb3-ac01eeed57ac" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399486 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e06733a-9c47-4bcf-a5e2-946db8e2714b" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399497 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a41bd7-f3fe-4c5b-88fd-ddbbebcb440c" containerName="installer" Mar 13 10:57:22.400102 master-0 
kubenswrapper[33013]: I0313 10:57:22.399512 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8337424-8677-401d-8c68-b58c7d9ab99a" containerName="assisted-installer-controller" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399546 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="1769d48d-7ef0-48ee-9b7d-b46151ae5df6" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399566 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e8e251-40d9-458a-92a7-9b2e91dc7359" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399608 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="533638d2-44ce-4cf8-aa47-a6b89c94621d" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399639 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8bdd05f-f920-4441-969f-336c85d2da57" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399654 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55a2a95-178c-4fcd-9866-3a149948d1d3" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399672 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="7baf3efc-04dc-4c17-9c2a-397ac022d281" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399688 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c87cc51-5c07-4ac7-b5ac-ce56b320ce1c" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399705 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d0a863-e526-43af-81e7-427336d845b0" containerName="installer" Mar 13 10:57:22.400102 master-0 kubenswrapper[33013]: I0313 10:57:22.399719 33013 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 13 10:57:22.400615 master-0 kubenswrapper[33013]: I0313 10:57:22.400270 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.406177 master-0 kubenswrapper[33013]: I0313 10:57:22.406141 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 10:57:22.406885 master-0 kubenswrapper[33013]: I0313 10:57:22.406869 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 13 10:57:22.407032 master-0 kubenswrapper[33013]: I0313 10:57:22.406989 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-jjg66" Mar 13 10:57:22.407202 master-0 kubenswrapper[33013]: I0313 10:57:22.406905 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 13 10:57:22.408861 master-0 kubenswrapper[33013]: I0313 10:57:22.408846 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 10:57:22.429016 master-0 kubenswrapper[33013]: I0313 10:57:22.427053 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-wbd5j"] Mar 13 10:57:22.430200 master-0 kubenswrapper[33013]: I0313 10:57:22.430158 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 13 10:57:22.477466 master-0 kubenswrapper[33013]: I0313 10:57:22.477407 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a61490fa-360e-42fe-b74b-7326b45a775a-trusted-ca\") pod 
\"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.477719 master-0 kubenswrapper[33013]: I0313 10:57:22.477478 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmr4h\" (UniqueName: \"kubernetes.io/projected/a61490fa-360e-42fe-b74b-7326b45a775a-kube-api-access-xmr4h\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.477719 master-0 kubenswrapper[33013]: I0313 10:57:22.477520 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61490fa-360e-42fe-b74b-7326b45a775a-config\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.477719 master-0 kubenswrapper[33013]: I0313 10:57:22.477561 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a61490fa-360e-42fe-b74b-7326b45a775a-serving-cert\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.579032 master-0 kubenswrapper[33013]: I0313 10:57:22.578938 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61490fa-360e-42fe-b74b-7326b45a775a-config\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.579032 master-0 kubenswrapper[33013]: I0313 10:57:22.579008 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a61490fa-360e-42fe-b74b-7326b45a775a-serving-cert\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.579470 master-0 kubenswrapper[33013]: I0313 10:57:22.579082 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a61490fa-360e-42fe-b74b-7326b45a775a-trusted-ca\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.579765 master-0 kubenswrapper[33013]: I0313 10:57:22.579108 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmr4h\" (UniqueName: \"kubernetes.io/projected/a61490fa-360e-42fe-b74b-7326b45a775a-kube-api-access-xmr4h\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.580359 master-0 kubenswrapper[33013]: I0313 10:57:22.580316 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61490fa-360e-42fe-b74b-7326b45a775a-config\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.580578 master-0 kubenswrapper[33013]: I0313 10:57:22.580523 33013 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 13 10:57:22.581437 master-0 kubenswrapper[33013]: I0313 10:57:22.581379 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a61490fa-360e-42fe-b74b-7326b45a775a-trusted-ca\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.583687 master-0 kubenswrapper[33013]: I0313 10:57:22.583645 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a61490fa-360e-42fe-b74b-7326b45a775a-serving-cert\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.597702 master-0 kubenswrapper[33013]: I0313 10:57:22.597620 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmr4h\" (UniqueName: \"kubernetes.io/projected/a61490fa-360e-42fe-b74b-7326b45a775a-kube-api-access-xmr4h\") pod \"console-operator-6c7fb6b958-wbd5j\" (UID: \"a61490fa-360e-42fe-b74b-7326b45a775a\") " pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:22.722949 master-0 kubenswrapper[33013]: I0313 10:57:22.722788 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:23.131434 master-0 kubenswrapper[33013]: I0313 10:57:23.131368 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-wbd5j"] Mar 13 10:57:23.141718 master-0 kubenswrapper[33013]: I0313 10:57:23.141677 33013 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 10:57:23.255044 master-0 kubenswrapper[33013]: I0313 10:57:23.254974 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" event={"ID":"a61490fa-360e-42fe-b74b-7326b45a775a","Type":"ContainerStarted","Data":"cac6265046b8a809f7ccee4bad22a32b94b6ab46cae3e7c72329474bcc81c02e"} Mar 13 10:57:26.278640 master-0 kubenswrapper[33013]: I0313 10:57:26.278577 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" event={"ID":"a61490fa-360e-42fe-b74b-7326b45a775a","Type":"ContainerStarted","Data":"71bf12ce81be5019265e5f7c0a1b21663b832ad24d4a6e2a66131201478bc6b5"} Mar 13 10:57:26.279236 master-0 kubenswrapper[33013]: I0313 10:57:26.278916 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:26.284081 master-0 kubenswrapper[33013]: I0313 10:57:26.284019 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" Mar 13 10:57:26.338860 master-0 kubenswrapper[33013]: I0313 10:57:26.338791 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-wbd5j" podStartSLOduration=1.931022367 podStartE2EDuration="4.338770544s" podCreationTimestamp="2026-03-13 10:57:22 +0000 UTC" firstStartedPulling="2026-03-13 10:57:23.141558462 +0000 
UTC m=+26.617511821" lastFinishedPulling="2026-03-13 10:57:25.549306649 +0000 UTC m=+29.025259998" observedRunningTime="2026-03-13 10:57:26.309115093 +0000 UTC m=+29.785068442" watchObservedRunningTime="2026-03-13 10:57:26.338770544 +0000 UTC m=+29.814723893" Mar 13 10:57:26.375284 master-0 kubenswrapper[33013]: I0313 10:57:26.375230 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-8mcnx"] Mar 13 10:57:26.377235 master-0 kubenswrapper[33013]: I0313 10:57:26.377212 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-8mcnx" Mar 13 10:57:26.381416 master-0 kubenswrapper[33013]: I0313 10:57:26.379934 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-fdstf" Mar 13 10:57:26.381416 master-0 kubenswrapper[33013]: I0313 10:57:26.380222 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 13 10:57:26.381416 master-0 kubenswrapper[33013]: I0313 10:57:26.380282 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 10:57:26.398052 master-0 kubenswrapper[33013]: I0313 10:57:26.397993 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-8mcnx"] Mar 13 10:57:26.439735 master-0 kubenswrapper[33013]: I0313 10:57:26.439640 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l6nc\" (UniqueName: \"kubernetes.io/projected/ed326fc5-8ccf-4cee-8ff9-77e7a1112757-kube-api-access-6l6nc\") pod \"downloads-84f57b9877-8mcnx\" (UID: \"ed326fc5-8ccf-4cee-8ff9-77e7a1112757\") " pod="openshift-console/downloads-84f57b9877-8mcnx" Mar 13 10:57:26.541203 master-0 kubenswrapper[33013]: I0313 10:57:26.541046 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6l6nc\" (UniqueName: \"kubernetes.io/projected/ed326fc5-8ccf-4cee-8ff9-77e7a1112757-kube-api-access-6l6nc\") pod \"downloads-84f57b9877-8mcnx\" (UID: \"ed326fc5-8ccf-4cee-8ff9-77e7a1112757\") " pod="openshift-console/downloads-84f57b9877-8mcnx" Mar 13 10:57:26.556993 master-0 kubenswrapper[33013]: I0313 10:57:26.556904 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l6nc\" (UniqueName: \"kubernetes.io/projected/ed326fc5-8ccf-4cee-8ff9-77e7a1112757-kube-api-access-6l6nc\") pod \"downloads-84f57b9877-8mcnx\" (UID: \"ed326fc5-8ccf-4cee-8ff9-77e7a1112757\") " pod="openshift-console/downloads-84f57b9877-8mcnx" Mar 13 10:57:26.716017 master-0 kubenswrapper[33013]: I0313 10:57:26.715947 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-8mcnx" Mar 13 10:57:26.870242 master-0 kubenswrapper[33013]: I0313 10:57:26.870140 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-5c546c5888-trv7f"] Mar 13 10:57:26.871763 master-0 kubenswrapper[33013]: I0313 10:57:26.871722 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" Mar 13 10:57:26.878414 master-0 kubenswrapper[33013]: I0313 10:57:26.878336 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 13 10:57:26.880463 master-0 kubenswrapper[33013]: I0313 10:57:26.880284 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5c546c5888-trv7f"] Mar 13 10:57:26.880911 master-0 kubenswrapper[33013]: I0313 10:57:26.880678 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-ng7z2" Mar 13 10:57:26.948120 master-0 kubenswrapper[33013]: I0313 10:57:26.947902 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/41eb5719-3691-4034-89f4-5f9d7e7d4d3f-monitoring-plugin-cert\") pod \"monitoring-plugin-5c546c5888-trv7f\" (UID: \"41eb5719-3691-4034-89f4-5f9d7e7d4d3f\") " pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" Mar 13 10:57:27.049600 master-0 kubenswrapper[33013]: I0313 10:57:27.049519 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/41eb5719-3691-4034-89f4-5f9d7e7d4d3f-monitoring-plugin-cert\") pod \"monitoring-plugin-5c546c5888-trv7f\" (UID: \"41eb5719-3691-4034-89f4-5f9d7e7d4d3f\") " pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" Mar 13 10:57:27.052998 master-0 kubenswrapper[33013]: I0313 10:57:27.052955 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/41eb5719-3691-4034-89f4-5f9d7e7d4d3f-monitoring-plugin-cert\") pod \"monitoring-plugin-5c546c5888-trv7f\" (UID: \"41eb5719-3691-4034-89f4-5f9d7e7d4d3f\") " pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" Mar 
13 10:57:27.116900 master-0 kubenswrapper[33013]: I0313 10:57:27.116822 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-8mcnx"] Mar 13 10:57:27.192782 master-0 kubenswrapper[33013]: I0313 10:57:27.192694 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" Mar 13 10:57:27.292655 master-0 kubenswrapper[33013]: I0313 10:57:27.289578 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-8mcnx" event={"ID":"ed326fc5-8ccf-4cee-8ff9-77e7a1112757","Type":"ContainerStarted","Data":"09759d3b3ad203a08c3b0cb18d2818de99f3db4f81118148f50a70640bd01876"} Mar 13 10:57:27.603177 master-0 kubenswrapper[33013]: I0313 10:57:27.603109 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5c546c5888-trv7f"] Mar 13 10:57:28.305576 master-0 kubenswrapper[33013]: I0313 10:57:28.305510 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" event={"ID":"41eb5719-3691-4034-89f4-5f9d7e7d4d3f","Type":"ContainerStarted","Data":"6cef98faef5ee504925bf8b25b55418a65605e1831281556de93d1bde4c2a696"} Mar 13 10:57:29.313454 master-0 kubenswrapper[33013]: I0313 10:57:29.313407 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" event={"ID":"41eb5719-3691-4034-89f4-5f9d7e7d4d3f","Type":"ContainerStarted","Data":"933c2daa7928059ffe08829313099aec7b4c0f95ef051a264c936fcc47d72419"} Mar 13 10:57:29.314115 master-0 kubenswrapper[33013]: I0313 10:57:29.314096 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" Mar 13 10:57:29.333763 master-0 kubenswrapper[33013]: I0313 10:57:29.333619 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" Mar 13 10:57:29.335855 master-0 kubenswrapper[33013]: I0313 10:57:29.335811 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-5c546c5888-trv7f" podStartSLOduration=1.891272493 podStartE2EDuration="3.335797177s" podCreationTimestamp="2026-03-13 10:57:26 +0000 UTC" firstStartedPulling="2026-03-13 10:57:27.632369901 +0000 UTC m=+31.108323250" lastFinishedPulling="2026-03-13 10:57:29.076894585 +0000 UTC m=+32.552847934" observedRunningTime="2026-03-13 10:57:29.33412373 +0000 UTC m=+32.810077079" watchObservedRunningTime="2026-03-13 10:57:29.335797177 +0000 UTC m=+32.811750526" Mar 13 10:57:33.979629 master-0 kubenswrapper[33013]: I0313 10:57:33.979451 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b94667db7-z29mk"] Mar 13 10:57:33.984603 master-0 kubenswrapper[33013]: I0313 10:57:33.980362 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:33.984603 master-0 kubenswrapper[33013]: I0313 10:57:33.983351 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 13 10:57:33.984603 master-0 kubenswrapper[33013]: I0313 10:57:33.983541 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 13 10:57:33.984603 master-0 kubenswrapper[33013]: I0313 10:57:33.983692 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 13 10:57:33.984603 master-0 kubenswrapper[33013]: I0313 10:57:33.983998 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 10:57:33.984603 master-0 kubenswrapper[33013]: I0313 10:57:33.984113 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-w2czt" Mar 13 10:57:33.984603 master-0 kubenswrapper[33013]: I0313 10:57:33.984524 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 13 10:57:34.009612 master-0 kubenswrapper[33013]: I0313 10:57:34.006101 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b94667db7-z29mk"] Mar 13 10:57:34.150101 master-0 kubenswrapper[33013]: I0313 10:57:34.150013 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-oauth-serving-cert\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.150458 master-0 kubenswrapper[33013]: I0313 10:57:34.150118 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-config\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.150458 master-0 kubenswrapper[33013]: I0313 10:57:34.150216 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vqt\" (UniqueName: \"kubernetes.io/projected/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-kube-api-access-77vqt\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.150458 master-0 kubenswrapper[33013]: I0313 10:57:34.150263 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-serving-cert\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.150458 master-0 kubenswrapper[33013]: I0313 10:57:34.150285 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-service-ca\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.150458 master-0 kubenswrapper[33013]: I0313 10:57:34.150371 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-oauth-config\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.150663 master-0 kubenswrapper[33013]: E0313 
10:57:34.150618 33013 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 10:57:34.150663 master-0 kubenswrapper[33013]: E0313 10:57:34.150638 33013 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 10:57:34.150733 master-0 kubenswrapper[33013]: E0313 10:57:34.150683 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access podName:533638d2-44ce-4cf8-aa47-a6b89c94621d nodeName:}" failed. No retries permitted until 2026-03-13 10:58:06.150668724 +0000 UTC m=+69.626622073 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access") pod "installer-3-master-0" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 10:57:34.151161 master-0 kubenswrapper[33013]: I0313 10:57:34.151124 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:57:34.255180 master-0 kubenswrapper[33013]: I0313 10:57:34.252898 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-serving-cert\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.255180 master-0 
kubenswrapper[33013]: I0313 10:57:34.252972 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-service-ca\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.255180 master-0 kubenswrapper[33013]: I0313 10:57:34.253020 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-oauth-config\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.255180 master-0 kubenswrapper[33013]: I0313 10:57:34.253111 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-oauth-serving-cert\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.255180 master-0 kubenswrapper[33013]: I0313 10:57:34.253135 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-config\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.255180 master-0 kubenswrapper[33013]: I0313 10:57:34.253165 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vqt\" (UniqueName: \"kubernetes.io/projected/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-kube-api-access-77vqt\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 
13 10:57:34.255904 master-0 kubenswrapper[33013]: I0313 10:57:34.255773 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-service-ca\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.255992 master-0 kubenswrapper[33013]: I0313 10:57:34.255907 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-config\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.256190 master-0 kubenswrapper[33013]: I0313 10:57:34.256117 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-oauth-serving-cert\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.257350 master-0 kubenswrapper[33013]: I0313 10:57:34.257323 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-oauth-config\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.258325 master-0 kubenswrapper[33013]: I0313 10:57:34.258293 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-serving-cert\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 
10:57:34.290170 master-0 kubenswrapper[33013]: I0313 10:57:34.290102 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vqt\" (UniqueName: \"kubernetes.io/projected/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-kube-api-access-77vqt\") pod \"console-b94667db7-z29mk\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.312164 master-0 kubenswrapper[33013]: I0313 10:57:34.312081 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:57:34.746117 master-0 kubenswrapper[33013]: I0313 10:57:34.744611 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:34.749790 master-0 kubenswrapper[33013]: I0313 10:57:34.749383 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 10:57:34.809013 master-0 kubenswrapper[33013]: I0313 10:57:34.808620 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b94667db7-z29mk"] Mar 13 10:57:35.770253 master-0 kubenswrapper[33013]: I0313 10:57:35.770169 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b94667db7-z29mk" event={"ID":"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0","Type":"ContainerStarted","Data":"1f7d2e8e2c19744fd3f8a62cfb3d5ed3ccfa3196383dd67998d63322c2273657"} Mar 13 10:57:39.424147 master-0 kubenswrapper[33013]: I0313 10:57:39.424075 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66b864759f-6clbz"] Mar 13 10:57:39.425222 master-0 kubenswrapper[33013]: I0313 10:57:39.425005 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66b864759f-6clbz" Mar 13 10:57:39.436941 master-0 kubenswrapper[33013]: I0313 10:57:39.436896 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 10:57:39.453122 master-0 kubenswrapper[33013]: I0313 10:57:39.446663 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66b864759f-6clbz"] Mar 13 10:57:39.466657 master-0 kubenswrapper[33013]: I0313 10:57:39.465056 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-oauth-serving-cert\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz" Mar 13 10:57:39.466657 master-0 kubenswrapper[33013]: I0313 10:57:39.465106 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-serving-cert\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz" Mar 13 10:57:39.466657 master-0 kubenswrapper[33013]: I0313 10:57:39.465137 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-console-config\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz" Mar 13 10:57:39.466657 master-0 kubenswrapper[33013]: I0313 10:57:39.465156 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zj2\" (UniqueName: 
\"kubernetes.io/projected/aca8f47b-7610-492c-bf79-a7e598b07054-kube-api-access-26zj2\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.466657 master-0 kubenswrapper[33013]: I0313 10:57:39.465201 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-oauth-config\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.466657 master-0 kubenswrapper[33013]: I0313 10:57:39.465543 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-trusted-ca-bundle\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.466657 master-0 kubenswrapper[33013]: I0313 10:57:39.465632 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-service-ca\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.567413 master-0 kubenswrapper[33013]: I0313 10:57:39.567348 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-oauth-config\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.567728 master-0 kubenswrapper[33013]: I0313 10:57:39.567463 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-trusted-ca-bundle\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.567728 master-0 kubenswrapper[33013]: I0313 10:57:39.567490 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-service-ca\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.567728 master-0 kubenswrapper[33013]: I0313 10:57:39.567521 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-oauth-serving-cert\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.567728 master-0 kubenswrapper[33013]: I0313 10:57:39.567547 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-serving-cert\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.567728 master-0 kubenswrapper[33013]: I0313 10:57:39.567576 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-console-config\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.567728 master-0 kubenswrapper[33013]: I0313 10:57:39.567612 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26zj2\" (UniqueName: \"kubernetes.io/projected/aca8f47b-7610-492c-bf79-a7e598b07054-kube-api-access-26zj2\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.569158 master-0 kubenswrapper[33013]: I0313 10:57:39.569124 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-service-ca\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.569963 master-0 kubenswrapper[33013]: I0313 10:57:39.569935 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-oauth-serving-cert\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.570464 master-0 kubenswrapper[33013]: I0313 10:57:39.570428 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-console-config\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.571242 master-0 kubenswrapper[33013]: I0313 10:57:39.571203 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-trusted-ca-bundle\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.573716 master-0 kubenswrapper[33013]: I0313 10:57:39.573662 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-oauth-config\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.578441 master-0 kubenswrapper[33013]: I0313 10:57:39.578405 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-serving-cert\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.594793 master-0 kubenswrapper[33013]: I0313 10:57:39.594729 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26zj2\" (UniqueName: \"kubernetes.io/projected/aca8f47b-7610-492c-bf79-a7e598b07054-kube-api-access-26zj2\") pod \"console-66b864759f-6clbz\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:39.767339 master-0 kubenswrapper[33013]: I0313 10:57:39.767181 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:40.486569 master-0 kubenswrapper[33013]: I0313 10:57:40.486445 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66b864759f-6clbz"]
Mar 13 10:57:40.817297 master-0 kubenswrapper[33013]: I0313 10:57:40.817241 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b864759f-6clbz" event={"ID":"aca8f47b-7610-492c-bf79-a7e598b07054","Type":"ContainerStarted","Data":"ae5b6ea2a145fdf7f9d35ebde17a54fa1f5cfec8a22e10004fdfdce453640d37"}
Mar 13 10:57:40.817297 master-0 kubenswrapper[33013]: I0313 10:57:40.817300 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b864759f-6clbz" event={"ID":"aca8f47b-7610-492c-bf79-a7e598b07054","Type":"ContainerStarted","Data":"7cf27f8dff55faab5d8c8aff3b41971d21ec5d40698e28b5068e78833a54882a"}
Mar 13 10:57:40.820857 master-0 kubenswrapper[33013]: I0313 10:57:40.820818 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b94667db7-z29mk" event={"ID":"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0","Type":"ContainerStarted","Data":"c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6"}
Mar 13 10:57:41.196852 master-0 kubenswrapper[33013]: I0313 10:57:41.196692 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66b864759f-6clbz" podStartSLOduration=2.196669419 podStartE2EDuration="2.196669419s" podCreationTimestamp="2026-03-13 10:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:57:41.058820468 +0000 UTC m=+44.534773827" watchObservedRunningTime="2026-03-13 10:57:41.196669419 +0000 UTC m=+44.672622768"
Mar 13 10:57:42.381619 master-0 kubenswrapper[33013]: I0313 10:57:42.376982 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b94667db7-z29mk" podStartSLOduration=4.025404643 podStartE2EDuration="9.376961642s" podCreationTimestamp="2026-03-13 10:57:33 +0000 UTC" firstStartedPulling="2026-03-13 10:57:34.819015746 +0000 UTC m=+38.294969095" lastFinishedPulling="2026-03-13 10:57:40.170572735 +0000 UTC m=+43.646526094" observedRunningTime="2026-03-13 10:57:41.195932568 +0000 UTC m=+44.671885917" watchObservedRunningTime="2026-03-13 10:57:42.376961642 +0000 UTC m=+45.852914991"
Mar 13 10:57:42.381619 master-0 kubenswrapper[33013]: I0313 10:57:42.377265 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 13 10:57:42.381619 master-0 kubenswrapper[33013]: I0313 10:57:42.378113 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.387944 master-0 kubenswrapper[33013]: I0313 10:57:42.387145 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-525r2"
Mar 13 10:57:42.387944 master-0 kubenswrapper[33013]: I0313 10:57:42.387505 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 13 10:57:42.404703 master-0 kubenswrapper[33013]: I0313 10:57:42.403308 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 13 10:57:42.432240 master-0 kubenswrapper[33013]: I0313 10:57:42.432164 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c968c6b5-f045-4f52-80b0-15df67f4eba3-kube-api-access\") pod \"installer-4-master-0\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.432240 master-0 kubenswrapper[33013]: I0313 10:57:42.432234 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-var-lock\") pod \"installer-4-master-0\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.432535 master-0 kubenswrapper[33013]: I0313 10:57:42.432287 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.533659 master-0 kubenswrapper[33013]: I0313 10:57:42.533568 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.533906 master-0 kubenswrapper[33013]: I0313 10:57:42.533666 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.533906 master-0 kubenswrapper[33013]: I0313 10:57:42.533714 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c968c6b5-f045-4f52-80b0-15df67f4eba3-kube-api-access\") pod \"installer-4-master-0\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.533906 master-0 kubenswrapper[33013]: I0313 10:57:42.533765 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-var-lock\") pod \"installer-4-master-0\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.533906 master-0 kubenswrapper[33013]: I0313 10:57:42.533851 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-var-lock\") pod \"installer-4-master-0\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.576575 master-0 kubenswrapper[33013]: I0313 10:57:42.576318 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c968c6b5-f045-4f52-80b0-15df67f4eba3-kube-api-access\") pod \"installer-4-master-0\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:42.720613 master-0 kubenswrapper[33013]: I0313 10:57:42.720449 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:57:43.191361 master-0 kubenswrapper[33013]: I0313 10:57:43.191297 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 13 10:57:43.201995 master-0 kubenswrapper[33013]: W0313 10:57:43.201928 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc968c6b5_f045_4f52_80b0_15df67f4eba3.slice/crio-387c3c8a99e5607a3749a035af10dff340223674fe9cbbb0567109c09b455982 WatchSource:0}: Error finding container 387c3c8a99e5607a3749a035af10dff340223674fe9cbbb0567109c09b455982: Status 404 returned error can't find the container with id 387c3c8a99e5607a3749a035af10dff340223674fe9cbbb0567109c09b455982
Mar 13 10:57:43.856136 master-0 kubenswrapper[33013]: I0313 10:57:43.855399 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"c968c6b5-f045-4f52-80b0-15df67f4eba3","Type":"ContainerStarted","Data":"20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a"}
Mar 13 10:57:43.856136 master-0 kubenswrapper[33013]: I0313 10:57:43.855456 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"c968c6b5-f045-4f52-80b0-15df67f4eba3","Type":"ContainerStarted","Data":"387c3c8a99e5607a3749a035af10dff340223674fe9cbbb0567109c09b455982"}
Mar 13 10:57:43.892309 master-0 kubenswrapper[33013]: I0313 10:57:43.892207 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=1.8921808169999998 podStartE2EDuration="1.892180817s" podCreationTimestamp="2026-03-13 10:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:57:43.888058921 +0000 UTC m=+47.364012270" watchObservedRunningTime="2026-03-13 10:57:43.892180817 +0000 UTC m=+47.368134156"
Mar 13 10:57:44.313189 master-0 kubenswrapper[33013]: I0313 10:57:44.313136 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b94667db7-z29mk"
Mar 13 10:57:44.313189 master-0 kubenswrapper[33013]: I0313 10:57:44.313191 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b94667db7-z29mk"
Mar 13 10:57:44.314738 master-0 kubenswrapper[33013]: I0313 10:57:44.314696 33013 patch_prober.go:28] interesting pod/console-b94667db7-z29mk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 10:57:44.314833 master-0 kubenswrapper[33013]: I0313 10:57:44.314760 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b94667db7-z29mk" podUID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 10:57:49.768145 master-0 kubenswrapper[33013]: I0313 10:57:49.768082 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:49.771824 master-0 kubenswrapper[33013]: I0313 10:57:49.768162 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66b864759f-6clbz"
Mar 13 10:57:49.771824 master-0 kubenswrapper[33013]: I0313 10:57:49.770651 33013 patch_prober.go:28] interesting pod/console-66b864759f-6clbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Mar 13 10:57:49.771824 master-0 kubenswrapper[33013]: I0313 10:57:49.770735 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-66b864759f-6clbz" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Mar 13 10:57:54.313729 master-0 kubenswrapper[33013]: I0313 10:57:54.313676 33013 patch_prober.go:28] interesting pod/console-b94667db7-z29mk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 10:57:54.314380 master-0 kubenswrapper[33013]: I0313 10:57:54.313743 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b94667db7-z29mk" podUID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 10:57:59.768204 master-0 kubenswrapper[33013]: I0313 10:57:59.768069 33013 patch_prober.go:28] interesting pod/console-66b864759f-6clbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Mar 13 10:57:59.768204 master-0 kubenswrapper[33013]: I0313 10:57:59.768139 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-66b864759f-6clbz" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Mar 13 10:58:02.504717 master-0 kubenswrapper[33013]: I0313 10:58:02.504650 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6758ccc497-88c27"]
Mar 13 10:58:02.508611 master-0 kubenswrapper[33013]: I0313 10:58:02.506562 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.516627 master-0 kubenswrapper[33013]: I0313 10:58:02.515873 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 13 10:58:02.516627 master-0 kubenswrapper[33013]: I0313 10:58:02.515948 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 13 10:58:02.516627 master-0 kubenswrapper[33013]: I0313 10:58:02.516312 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 13 10:58:02.516627 master-0 kubenswrapper[33013]: I0313 10:58:02.516340 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 13 10:58:02.516627 master-0 kubenswrapper[33013]: I0313 10:58:02.516479 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 13 10:58:02.516627 master-0 kubenswrapper[33013]: I0313 10:58:02.516542 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 13 10:58:02.517012 master-0 kubenswrapper[33013]: I0313 10:58:02.516657 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 13 10:58:02.517012 master-0 kubenswrapper[33013]: I0313 10:58:02.516762 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 13 10:58:02.517012 master-0 kubenswrapper[33013]: I0313 10:58:02.516859 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-4xv4g"
Mar 13 10:58:02.517854 master-0 kubenswrapper[33013]: I0313 10:58:02.517554 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 13 10:58:02.517854 master-0 kubenswrapper[33013]: I0313 10:58:02.517799 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 13 10:58:02.517952 master-0 kubenswrapper[33013]: I0313 10:58:02.517929 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 13 10:58:02.551287 master-0 kubenswrapper[33013]: I0313 10:58:02.549532 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6758ccc497-88c27"]
Mar 13 10:58:02.551389 master-0 kubenswrapper[33013]: I0313 10:58:02.550888 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.619558 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-session\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.619711 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.619775 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.619809 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-error\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.619888 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-service-ca\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.619948 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-dir\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.620033 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-policies\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.620069 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fhpm\" (UniqueName: \"kubernetes.io/projected/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-kube-api-access-9fhpm\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.620150 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-login\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.620202 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.620283 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.620323 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-router-certs\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.623716 master-0 kubenswrapper[33013]: I0313 10:58:02.620373 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.632398 master-0 kubenswrapper[33013]: I0313 10:58:02.632123 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 13 10:58:02.722008 master-0 kubenswrapper[33013]: I0313 10:58:02.721908 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-login\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722008 master-0 kubenswrapper[33013]: I0313 10:58:02.722006 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722331 master-0 kubenswrapper[33013]: I0313 10:58:02.722055 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722331 master-0 kubenswrapper[33013]: I0313 10:58:02.722194 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-router-certs\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722331 master-0 kubenswrapper[33013]: I0313 10:58:02.722230 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722331 master-0 kubenswrapper[33013]: I0313 10:58:02.722262 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-session\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722331 master-0 kubenswrapper[33013]: I0313 10:58:02.722293 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722720 master-0 kubenswrapper[33013]: I0313 10:58:02.722663 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722794 master-0 kubenswrapper[33013]: I0313 10:58:02.722730 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-error\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722845 master-0 kubenswrapper[33013]: I0313 10:58:02.722807 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-service-ca\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722892 master-0 kubenswrapper[33013]: I0313 10:58:02.722856 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-dir\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.722958 master-0 kubenswrapper[33013]: I0313 10:58:02.722931 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-policies\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.723047 master-0 kubenswrapper[33013]: I0313 10:58:02.722970 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fhpm\" (UniqueName: \"kubernetes.io/projected/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-kube-api-access-9fhpm\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.723755 master-0 kubenswrapper[33013]: I0313 10:58:02.723724 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-dir\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.723971 master-0 kubenswrapper[33013]: I0313 10:58:02.723919 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-service-ca\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.724185 master-0 kubenswrapper[33013]: I0313 10:58:02.724140 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.724836 master-0 kubenswrapper[33013]: I0313 10:58:02.724803 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-policies\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.726670 master-0 kubenswrapper[33013]: I0313 10:58:02.726580 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.728052 master-0 kubenswrapper[33013]: I0313 10:58:02.727871 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-router-certs\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.728052 master-0 kubenswrapper[33013]: I0313 10:58:02.727903 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.728851 master-0 kubenswrapper[33013]: I0313 10:58:02.728806 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-login\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.729673 master-0 kubenswrapper[33013]: I0313 10:58:02.729578 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-session\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.733519 master-0 kubenswrapper[33013]: I0313 10:58:02.733456 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27"
Mar 13 10:58:02.733610 master-0 kubenswrapper[33013]: I0313 10:58:02.733504 33013
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-error\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" Mar 13 10:58:02.734031 master-0 kubenswrapper[33013]: I0313 10:58:02.733990 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" Mar 13 10:58:02.745047 master-0 kubenswrapper[33013]: I0313 10:58:02.744981 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fhpm\" (UniqueName: \"kubernetes.io/projected/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-kube-api-access-9fhpm\") pod \"oauth-openshift-6758ccc497-88c27\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" Mar 13 10:58:02.935134 master-0 kubenswrapper[33013]: I0313 10:58:02.935069 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" Mar 13 10:58:04.313900 master-0 kubenswrapper[33013]: I0313 10:58:04.313787 33013 patch_prober.go:28] interesting pod/console-b94667db7-z29mk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 10:58:04.315508 master-0 kubenswrapper[33013]: I0313 10:58:04.313918 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b94667db7-z29mk" podUID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 10:58:05.505353 master-0 kubenswrapper[33013]: I0313 10:58:05.505202 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6758ccc497-88c27"] Mar 13 10:58:05.719427 master-0 kubenswrapper[33013]: I0313 10:58:05.719340 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6758ccc497-88c27"] Mar 13 10:58:06.015032 master-0 kubenswrapper[33013]: I0313 10:58:06.014801 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" event={"ID":"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf","Type":"ContainerStarted","Data":"00902a833b73ae1850a1abba57e5aa445610845174b26c8effec37c5fd553e66"} Mar 13 10:58:06.016467 master-0 kubenswrapper[33013]: I0313 10:58:06.016401 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-8mcnx" event={"ID":"ed326fc5-8ccf-4cee-8ff9-77e7a1112757","Type":"ContainerStarted","Data":"adc0455a8511a56d2413797e6b9c0d9767e72e1d5302fbb439d299ff293bee03"} Mar 13 10:58:06.016923 master-0 kubenswrapper[33013]: I0313 10:58:06.016871 33013 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-8mcnx" Mar 13 10:58:06.019186 master-0 kubenswrapper[33013]: I0313 10:58:06.019146 33013 patch_prober.go:28] interesting pod/downloads-84f57b9877-8mcnx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 13 10:58:06.019326 master-0 kubenswrapper[33013]: I0313 10:58:06.019194 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-8mcnx" podUID="ed326fc5-8ccf-4cee-8ff9-77e7a1112757" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Mar 13 10:58:06.083252 master-0 kubenswrapper[33013]: I0313 10:58:06.083157 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-8mcnx" podStartSLOduration=1.538645775 podStartE2EDuration="40.083133799s" podCreationTimestamp="2026-03-13 10:57:26 +0000 UTC" firstStartedPulling="2026-03-13 10:57:27.129187735 +0000 UTC m=+30.605141124" lastFinishedPulling="2026-03-13 10:58:05.673675809 +0000 UTC m=+69.149629148" observedRunningTime="2026-03-13 10:58:06.082274225 +0000 UTC m=+69.558227664" watchObservedRunningTime="2026-03-13 10:58:06.083133799 +0000 UTC m=+69.559087148" Mar 13 10:58:06.188386 master-0 kubenswrapper[33013]: I0313 10:58:06.188299 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:58:06.191805 master-0 kubenswrapper[33013]: I0313 10:58:06.191754 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 10:58:06.289914 master-0 kubenswrapper[33013]: I0313 10:58:06.289784 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") pod \"533638d2-44ce-4cf8-aa47-a6b89c94621d\" (UID: \"533638d2-44ce-4cf8-aa47-a6b89c94621d\") " Mar 13 10:58:06.295055 master-0 kubenswrapper[33013]: I0313 10:58:06.294931 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "533638d2-44ce-4cf8-aa47-a6b89c94621d" (UID: "533638d2-44ce-4cf8-aa47-a6b89c94621d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:58:06.392482 master-0 kubenswrapper[33013]: I0313 10:58:06.392314 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533638d2-44ce-4cf8-aa47-a6b89c94621d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:06.716802 master-0 kubenswrapper[33013]: I0313 10:58:06.716651 33013 patch_prober.go:28] interesting pod/downloads-84f57b9877-8mcnx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 13 10:58:06.716802 master-0 kubenswrapper[33013]: I0313 10:58:06.716711 33013 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-84f57b9877-8mcnx" podUID="ed326fc5-8ccf-4cee-8ff9-77e7a1112757" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Mar 13 10:58:06.716802 master-0 kubenswrapper[33013]: I0313 10:58:06.716716 33013 patch_prober.go:28] interesting pod/downloads-84f57b9877-8mcnx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 13 10:58:06.717523 master-0 kubenswrapper[33013]: I0313 10:58:06.716828 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-8mcnx" podUID="ed326fc5-8ccf-4cee-8ff9-77e7a1112757" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Mar 13 10:58:07.024375 master-0 kubenswrapper[33013]: I0313 10:58:07.024226 33013 patch_prober.go:28] interesting pod/downloads-84f57b9877-8mcnx container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 13 10:58:07.024375 master-0 kubenswrapper[33013]: I0313 10:58:07.024318 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-8mcnx" podUID="ed326fc5-8ccf-4cee-8ff9-77e7a1112757" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Mar 13 10:58:09.768055 master-0 kubenswrapper[33013]: I0313 10:58:09.767983 33013 patch_prober.go:28] interesting pod/console-66b864759f-6clbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 13 10:58:09.768055 master-0 kubenswrapper[33013]: I0313 10:58:09.768050 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-66b864759f-6clbz" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 13 10:58:14.314343 master-0 kubenswrapper[33013]: I0313 10:58:14.314195 33013 patch_prober.go:28] interesting pod/console-b94667db7-z29mk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 10:58:14.314343 master-0 kubenswrapper[33013]: I0313 10:58:14.314277 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b94667db7-z29mk" podUID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 10:58:16.722901 master-0 
kubenswrapper[33013]: I0313 10:58:16.722831 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-8mcnx" Mar 13 10:58:19.768968 master-0 kubenswrapper[33013]: I0313 10:58:19.768850 33013 patch_prober.go:28] interesting pod/console-66b864759f-6clbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 13 10:58:19.769807 master-0 kubenswrapper[33013]: I0313 10:58:19.769012 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-66b864759f-6clbz" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 13 10:58:19.892760 master-0 kubenswrapper[33013]: I0313 10:58:19.892675 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 13 10:58:19.893142 master-0 kubenswrapper[33013]: I0313 10:58:19.893092 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="c968c6b5-f045-4f52-80b0-15df67f4eba3" containerName="installer" containerID="cri-o://20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a" gracePeriod=30 Mar 13 10:58:20.114970 master-0 kubenswrapper[33013]: I0313 10:58:20.114907 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" event={"ID":"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf","Type":"ContainerStarted","Data":"48c965cc632a0a17ec00213143fda2c334316e2d7519bf2bd16e390d52d01130"} Mar 13 10:58:20.115310 master-0 kubenswrapper[33013]: I0313 10:58:20.115253 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" Mar 13 10:58:20.120934 master-0 kubenswrapper[33013]: I0313 10:58:20.120895 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" Mar 13 10:58:20.130625 master-0 kubenswrapper[33013]: I0313 10:58:20.130459 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867876d6b6-tpq67"] Mar 13 10:58:20.135380 master-0 kubenswrapper[33013]: I0313 10:58:20.134799 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" podUID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerName="controller-manager" containerID="cri-o://f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e" gracePeriod=30 Mar 13 10:58:20.156799 master-0 kubenswrapper[33013]: I0313 10:58:20.156697 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" podStartSLOduration=5.029874728 podStartE2EDuration="18.15657907s" podCreationTimestamp="2026-03-13 10:58:02 +0000 UTC" firstStartedPulling="2026-03-13 10:58:05.741058486 +0000 UTC m=+69.217011825" lastFinishedPulling="2026-03-13 10:58:18.867762818 +0000 UTC m=+82.343716167" observedRunningTime="2026-03-13 10:58:20.153849264 +0000 UTC m=+83.629802623" watchObservedRunningTime="2026-03-13 10:58:20.15657907 +0000 UTC m=+83.632532419" Mar 13 10:58:20.171518 master-0 kubenswrapper[33013]: I0313 10:58:20.170485 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"] Mar 13 10:58:20.171518 master-0 kubenswrapper[33013]: I0313 10:58:20.170959 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" 
podUID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerName="route-controller-manager" containerID="cri-o://dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8" gracePeriod=30 Mar 13 10:58:20.702255 master-0 kubenswrapper[33013]: I0313 10:58:20.702200 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" Mar 13 10:58:20.731527 master-0 kubenswrapper[33013]: I0313 10:58:20.731454 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") pod \"c09f42db-e6d7-469d-9761-88a879f6aa6b\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " Mar 13 10:58:20.731527 master-0 kubenswrapper[33013]: I0313 10:58:20.731516 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") pod \"c09f42db-e6d7-469d-9761-88a879f6aa6b\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " Mar 13 10:58:20.731527 master-0 kubenswrapper[33013]: I0313 10:58:20.731539 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcb99\" (UniqueName: \"kubernetes.io/projected/c09f42db-e6d7-469d-9761-88a879f6aa6b-kube-api-access-mcb99\") pod \"c09f42db-e6d7-469d-9761-88a879f6aa6b\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " Mar 13 10:58:20.731527 master-0 kubenswrapper[33013]: I0313 10:58:20.731560 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert\") pod \"c09f42db-e6d7-469d-9761-88a879f6aa6b\" (UID: \"c09f42db-e6d7-469d-9761-88a879f6aa6b\") " Mar 13 10:58:20.732680 master-0 kubenswrapper[33013]: I0313 10:58:20.732635 33013 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca" (OuterVolumeSpecName: "client-ca") pod "c09f42db-e6d7-469d-9761-88a879f6aa6b" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:20.733109 master-0 kubenswrapper[33013]: I0313 10:58:20.733070 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config" (OuterVolumeSpecName: "config") pod "c09f42db-e6d7-469d-9761-88a879f6aa6b" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:20.739479 master-0 kubenswrapper[33013]: I0313 10:58:20.739413 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c09f42db-e6d7-469d-9761-88a879f6aa6b" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:20.739738 master-0 kubenswrapper[33013]: I0313 10:58:20.739477 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09f42db-e6d7-469d-9761-88a879f6aa6b-kube-api-access-mcb99" (OuterVolumeSpecName: "kube-api-access-mcb99") pod "c09f42db-e6d7-469d-9761-88a879f6aa6b" (UID: "c09f42db-e6d7-469d-9761-88a879f6aa6b"). InnerVolumeSpecName "kube-api-access-mcb99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:58:20.850628 master-0 kubenswrapper[33013]: I0313 10:58:20.846625 33013 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:20.850628 master-0 kubenswrapper[33013]: I0313 10:58:20.846718 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09f42db-e6d7-469d-9761-88a879f6aa6b-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:20.850628 master-0 kubenswrapper[33013]: I0313 10:58:20.846731 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcb99\" (UniqueName: \"kubernetes.io/projected/c09f42db-e6d7-469d-9761-88a879f6aa6b-kube-api-access-mcb99\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:20.850628 master-0 kubenswrapper[33013]: I0313 10:58:20.846741 33013 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c09f42db-e6d7-469d-9761-88a879f6aa6b-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:20.920706 master-0 kubenswrapper[33013]: I0313 10:58:20.920660 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_c968c6b5-f045-4f52-80b0-15df67f4eba3/installer/0.log" Mar 13 10:58:20.920916 master-0 kubenswrapper[33013]: I0313 10:58:20.920729 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 10:58:20.948088 master-0 kubenswrapper[33013]: I0313 10:58:20.947990 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-kubelet-dir\") pod \"c968c6b5-f045-4f52-80b0-15df67f4eba3\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " Mar 13 10:58:20.948088 master-0 kubenswrapper[33013]: I0313 10:58:20.948064 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c968c6b5-f045-4f52-80b0-15df67f4eba3-kube-api-access\") pod \"c968c6b5-f045-4f52-80b0-15df67f4eba3\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " Mar 13 10:58:20.948256 master-0 kubenswrapper[33013]: I0313 10:58:20.948119 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c968c6b5-f045-4f52-80b0-15df67f4eba3" (UID: "c968c6b5-f045-4f52-80b0-15df67f4eba3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:58:20.948256 master-0 kubenswrapper[33013]: I0313 10:58:20.948218 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-var-lock\") pod \"c968c6b5-f045-4f52-80b0-15df67f4eba3\" (UID: \"c968c6b5-f045-4f52-80b0-15df67f4eba3\") " Mar 13 10:58:20.948605 master-0 kubenswrapper[33013]: I0313 10:58:20.948549 33013 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:20.948669 master-0 kubenswrapper[33013]: I0313 10:58:20.948652 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-var-lock" (OuterVolumeSpecName: "var-lock") pod "c968c6b5-f045-4f52-80b0-15df67f4eba3" (UID: "c968c6b5-f045-4f52-80b0-15df67f4eba3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:58:20.951817 master-0 kubenswrapper[33013]: I0313 10:58:20.951773 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c968c6b5-f045-4f52-80b0-15df67f4eba3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c968c6b5-f045-4f52-80b0-15df67f4eba3" (UID: "c968c6b5-f045-4f52-80b0-15df67f4eba3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:58:20.963197 master-0 kubenswrapper[33013]: I0313 10:58:20.963144 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" Mar 13 10:58:21.049454 master-0 kubenswrapper[33013]: I0313 10:58:21.049367 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config\") pod \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " Mar 13 10:58:21.049728 master-0 kubenswrapper[33013]: I0313 10:58:21.049466 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles\") pod \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " Mar 13 10:58:21.049728 master-0 kubenswrapper[33013]: I0313 10:58:21.049508 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5656\" (UniqueName: \"kubernetes.io/projected/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-kube-api-access-f5656\") pod \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " Mar 13 10:58:21.049728 master-0 kubenswrapper[33013]: I0313 10:58:21.049577 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca\") pod \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " Mar 13 10:58:21.049728 master-0 kubenswrapper[33013]: I0313 10:58:21.049664 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert\") pod \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\" (UID: \"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc\") " Mar 13 10:58:21.050088 master-0 kubenswrapper[33013]: I0313 10:58:21.050059 
33013 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c968c6b5-f045-4f52-80b0-15df67f4eba3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:21.050088 master-0 kubenswrapper[33013]: I0313 10:58:21.050085 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c968c6b5-f045-4f52-80b0-15df67f4eba3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:21.050436 master-0 kubenswrapper[33013]: I0313 10:58:21.050406 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:21.050476 master-0 kubenswrapper[33013]: I0313 10:58:21.050427 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca" (OuterVolumeSpecName: "client-ca") pod "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:21.050512 master-0 kubenswrapper[33013]: I0313 10:58:21.050445 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config" (OuterVolumeSpecName: "config") pod "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 10:58:21.052734 master-0 kubenswrapper[33013]: I0313 10:58:21.052685 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-kube-api-access-f5656" (OuterVolumeSpecName: "kube-api-access-f5656") pod "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc"). InnerVolumeSpecName "kube-api-access-f5656". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 10:58:21.053286 master-0 kubenswrapper[33013]: I0313 10:58:21.053244 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" (UID: "a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 10:58:21.122115 master-0 kubenswrapper[33013]: I0313 10:58:21.122068 33013 generic.go:334] "Generic (PLEG): container finished" podID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerID="f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e" exitCode=0
Mar 13 10:58:21.122334 master-0 kubenswrapper[33013]: I0313 10:58:21.122118 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67"
Mar 13 10:58:21.122334 master-0 kubenswrapper[33013]: I0313 10:58:21.122176 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" event={"ID":"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc","Type":"ContainerDied","Data":"f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e"}
Mar 13 10:58:21.122334 master-0 kubenswrapper[33013]: I0313 10:58:21.122241 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867876d6b6-tpq67" event={"ID":"a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc","Type":"ContainerDied","Data":"dd93ec4fe47e71fd21c0051085976706d225fa5cba2fcde1e22ce417bdc6d6e7"}
Mar 13 10:58:21.122334 master-0 kubenswrapper[33013]: I0313 10:58:21.122262 33013 scope.go:117] "RemoveContainer" containerID="f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e"
Mar 13 10:58:21.123963 master-0 kubenswrapper[33013]: I0313 10:58:21.123927 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_c968c6b5-f045-4f52-80b0-15df67f4eba3/installer/0.log"
Mar 13 10:58:21.124040 master-0 kubenswrapper[33013]: I0313 10:58:21.123991 33013 generic.go:334] "Generic (PLEG): container finished" podID="c968c6b5-f045-4f52-80b0-15df67f4eba3" containerID="20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a" exitCode=1
Mar 13 10:58:21.124091 master-0 kubenswrapper[33013]: I0313 10:58:21.124058 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"c968c6b5-f045-4f52-80b0-15df67f4eba3","Type":"ContainerDied","Data":"20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a"}
Mar 13 10:58:21.124222 master-0 kubenswrapper[33013]: I0313 10:58:21.124088 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"c968c6b5-f045-4f52-80b0-15df67f4eba3","Type":"ContainerDied","Data":"387c3c8a99e5607a3749a035af10dff340223674fe9cbbb0567109c09b455982"}
Mar 13 10:58:21.124222 master-0 kubenswrapper[33013]: I0313 10:58:21.124060 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 13 10:58:21.125914 master-0 kubenswrapper[33013]: I0313 10:58:21.125883 33013 generic.go:334] "Generic (PLEG): container finished" podID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerID="dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8" exitCode=0
Mar 13 10:58:21.125914 master-0 kubenswrapper[33013]: I0313 10:58:21.125910 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" event={"ID":"c09f42db-e6d7-469d-9761-88a879f6aa6b","Type":"ContainerDied","Data":"dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8"}
Mar 13 10:58:21.126011 master-0 kubenswrapper[33013]: I0313 10:58:21.125930 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl" event={"ID":"c09f42db-e6d7-469d-9761-88a879f6aa6b","Type":"ContainerDied","Data":"2baa20e270e178f3e40e4ef86226c93b0ff3020bf6dac2cb5d4f63eecde92557"}
Mar 13 10:58:21.126011 master-0 kubenswrapper[33013]: I0313 10:58:21.125941 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"
Mar 13 10:58:21.139246 master-0 kubenswrapper[33013]: I0313 10:58:21.139181 33013 scope.go:117] "RemoveContainer" containerID="3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432"
Mar 13 10:58:21.152850 master-0 kubenswrapper[33013]: I0313 10:58:21.152787 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-config\") on node \"master-0\" DevicePath \"\""
Mar 13 10:58:21.152850 master-0 kubenswrapper[33013]: I0313 10:58:21.152825 33013 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 13 10:58:21.153137 master-0 kubenswrapper[33013]: I0313 10:58:21.152858 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5656\" (UniqueName: \"kubernetes.io/projected/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-kube-api-access-f5656\") on node \"master-0\" DevicePath \"\""
Mar 13 10:58:21.153137 master-0 kubenswrapper[33013]: I0313 10:58:21.152868 33013 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 10:58:21.153137 master-0 kubenswrapper[33013]: I0313 10:58:21.152876 33013 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 10:58:21.162198 master-0 kubenswrapper[33013]: I0313 10:58:21.162098 33013 scope.go:117] "RemoveContainer" containerID="f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e"
Mar 13 10:58:21.162725 master-0 kubenswrapper[33013]: E0313 10:58:21.162531 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e\": container with ID starting with f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e not found: ID does not exist" containerID="f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e"
Mar 13 10:58:21.162725 master-0 kubenswrapper[33013]: I0313 10:58:21.162574 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e"} err="failed to get container status \"f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e\": rpc error: code = NotFound desc = could not find container \"f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e\": container with ID starting with f725fe2be0bf9af36703aa7d3255f2363495d32e85e10bf8735ca5097115d77e not found: ID does not exist"
Mar 13 10:58:21.162725 master-0 kubenswrapper[33013]: I0313 10:58:21.162663 33013 scope.go:117] "RemoveContainer" containerID="3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432"
Mar 13 10:58:21.163022 master-0 kubenswrapper[33013]: E0313 10:58:21.162939 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432\": container with ID starting with 3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432 not found: ID does not exist" containerID="3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432"
Mar 13 10:58:21.163022 master-0 kubenswrapper[33013]: I0313 10:58:21.162958 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432"} err="failed to get container status \"3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432\": rpc error: code = NotFound desc = could not find container \"3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432\": container with ID starting with 3e37f1f22df5c284c9d1ba661521c6c1d227be08ffa00372db4208f240cca432 not found: ID does not exist"
Mar 13 10:58:21.163022 master-0 kubenswrapper[33013]: I0313 10:58:21.162971 33013 scope.go:117] "RemoveContainer" containerID="20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a"
Mar 13 10:58:21.185743 master-0 kubenswrapper[33013]: I0313 10:58:21.181372 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 13 10:58:21.185743 master-0 kubenswrapper[33013]: I0313 10:58:21.183822 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 13 10:58:21.218466 master-0 kubenswrapper[33013]: I0313 10:58:21.218430 33013 scope.go:117] "RemoveContainer" containerID="20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a"
Mar 13 10:58:21.218939 master-0 kubenswrapper[33013]: E0313 10:58:21.218911 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a\": container with ID starting with 20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a not found: ID does not exist" containerID="20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a"
Mar 13 10:58:21.219129 master-0 kubenswrapper[33013]: I0313 10:58:21.219103 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a"} err="failed to get container status \"20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a\": rpc error: code = NotFound desc = could not find container \"20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a\": container with ID starting with 20fb1b355eb09d56c57f7282f8707317b516c21df48ff00e5267c55875955f3a not found: ID does not exist"
Mar 13 10:58:21.219304 master-0 kubenswrapper[33013]: I0313 10:58:21.219288 33013 scope.go:117] "RemoveContainer" containerID="dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8"
Mar 13 10:58:21.228634 master-0 kubenswrapper[33013]: I0313 10:58:21.228570 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867876d6b6-tpq67"]
Mar 13 10:58:21.233091 master-0 kubenswrapper[33013]: I0313 10:58:21.233022 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-867876d6b6-tpq67"]
Mar 13 10:58:21.239573 master-0 kubenswrapper[33013]: I0313 10:58:21.239522 33013 scope.go:117] "RemoveContainer" containerID="a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983"
Mar 13 10:58:21.250527 master-0 kubenswrapper[33013]: I0313 10:58:21.250464 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"]
Mar 13 10:58:21.262344 master-0 kubenswrapper[33013]: I0313 10:58:21.262233 33013 scope.go:117] "RemoveContainer" containerID="dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8"
Mar 13 10:58:21.263220 master-0 kubenswrapper[33013]: E0313 10:58:21.263175 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8\": container with ID starting with dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8 not found: ID does not exist" containerID="dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8"
Mar 13 10:58:21.263289 master-0 kubenswrapper[33013]: I0313 10:58:21.263229 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8"} err="failed to get container status \"dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8\": rpc error: code = NotFound desc = could not find container \"dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8\": container with ID starting with dfbaca94957b1c7c9e9fa4fa8737e0b8700b98537257590b3b241ae68fa32dd8 not found: ID does not exist"
Mar 13 10:58:21.263289 master-0 kubenswrapper[33013]: I0313 10:58:21.263262 33013 scope.go:117] "RemoveContainer" containerID="a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983"
Mar 13 10:58:21.263764 master-0 kubenswrapper[33013]: E0313 10:58:21.263735 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983\": container with ID starting with a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983 not found: ID does not exist" containerID="a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983"
Mar 13 10:58:21.263840 master-0 kubenswrapper[33013]: I0313 10:58:21.263762 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983"} err="failed to get container status \"a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983\": rpc error: code = NotFound desc = could not find container \"a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983\": container with ID starting with a8df02bb41a45b57cf8e71e70880ad2fbf324a4b46f7ee5205697f332f790983 not found: ID does not exist"
Mar 13 10:58:21.265126 master-0 kubenswrapper[33013]: I0313 10:58:21.265095 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9bd68fd6-lwnzl"]
Mar 13 10:58:21.330667 master-0 kubenswrapper[33013]: I0313 10:58:21.329302 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-b94667db7-z29mk"]
Mar 13 10:58:21.373663 master-0 kubenswrapper[33013]: I0313 10:58:21.373617 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-dc44494b5-hphsz"]
Mar 13 10:58:21.374229 master-0 kubenswrapper[33013]: E0313 10:58:21.374213 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerName="route-controller-manager"
Mar 13 10:58:21.374308 master-0 kubenswrapper[33013]: I0313 10:58:21.374297 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerName="route-controller-manager"
Mar 13 10:58:21.374381 master-0 kubenswrapper[33013]: E0313 10:58:21.374371 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerName="controller-manager"
Mar 13 10:58:21.374434 master-0 kubenswrapper[33013]: I0313 10:58:21.374425 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerName="controller-manager"
Mar 13 10:58:21.374503 master-0 kubenswrapper[33013]: E0313 10:58:21.374493 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerName="controller-manager"
Mar 13 10:58:21.374558 master-0 kubenswrapper[33013]: I0313 10:58:21.374549 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerName="controller-manager"
Mar 13 10:58:21.375612 master-0 kubenswrapper[33013]: E0313 10:58:21.375582 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c968c6b5-f045-4f52-80b0-15df67f4eba3" containerName="installer"
Mar 13 10:58:21.375692 master-0 kubenswrapper[33013]: I0313 10:58:21.375682 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="c968c6b5-f045-4f52-80b0-15df67f4eba3" containerName="installer"
Mar 13 10:58:21.375891 master-0 kubenswrapper[33013]: I0313 10:58:21.375879 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerName="controller-manager"
Mar 13 10:58:21.376076 master-0 kubenswrapper[33013]: I0313 10:58:21.376060 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerName="route-controller-manager"
Mar 13 10:58:21.376296 master-0 kubenswrapper[33013]: I0313 10:58:21.376285 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" containerName="controller-manager"
Mar 13 10:58:21.376365 master-0 kubenswrapper[33013]: I0313 10:58:21.376355 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="c968c6b5-f045-4f52-80b0-15df67f4eba3" containerName="installer"
Mar 13 10:58:21.376437 master-0 kubenswrapper[33013]: I0313 10:58:21.376427 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerName="route-controller-manager"
Mar 13 10:58:21.376996 master-0 kubenswrapper[33013]: I0313 10:58:21.376980 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.389177 master-0 kubenswrapper[33013]: I0313 10:58:21.386013 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-dc44494b5-hphsz"]
Mar 13 10:58:21.457755 master-0 kubenswrapper[33013]: I0313 10:58:21.457597 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-oauth-config\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.457755 master-0 kubenswrapper[33013]: I0313 10:58:21.457695 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-serving-cert\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.457755 master-0 kubenswrapper[33013]: I0313 10:58:21.457750 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-oauth-serving-cert\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.458024 master-0 kubenswrapper[33013]: I0313 10:58:21.457795 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-service-ca\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.458024 master-0 kubenswrapper[33013]: I0313 10:58:21.457825 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-config\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.458024 master-0 kubenswrapper[33013]: I0313 10:58:21.457843 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-trusted-ca-bundle\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.458024 master-0 kubenswrapper[33013]: I0313 10:58:21.457859 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mchd\" (UniqueName: \"kubernetes.io/projected/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-kube-api-access-4mchd\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.514323 master-0 kubenswrapper[33013]: I0313 10:58:21.514257 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-656877d9c5-dgdlt"]
Mar 13 10:58:21.514761 master-0 kubenswrapper[33013]: E0313 10:58:21.514735 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerName="route-controller-manager"
Mar 13 10:58:21.514761 master-0 kubenswrapper[33013]: I0313 10:58:21.514760 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09f42db-e6d7-469d-9761-88a879f6aa6b" containerName="route-controller-manager"
Mar 13 10:58:21.515523 master-0 kubenswrapper[33013]: I0313 10:58:21.515496 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.516966 master-0 kubenswrapper[33013]: I0313 10:58:21.516935 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"]
Mar 13 10:58:21.518100 master-0 kubenswrapper[33013]: I0313 10:58:21.518084 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.520740 master-0 kubenswrapper[33013]: I0313 10:58:21.518518 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-l5tkf"
Mar 13 10:58:21.520740 master-0 kubenswrapper[33013]: I0313 10:58:21.519817 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 10:58:21.520740 master-0 kubenswrapper[33013]: I0313 10:58:21.519422 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 10:58:21.520740 master-0 kubenswrapper[33013]: I0313 10:58:21.520275 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 10:58:21.520740 master-0 kubenswrapper[33013]: I0313 10:58:21.520404 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fn5mm"
Mar 13 10:58:21.520740 master-0 kubenswrapper[33013]: I0313 10:58:21.520512 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 10:58:21.520740 master-0 kubenswrapper[33013]: I0313 10:58:21.520640 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 10:58:21.520740 master-0 kubenswrapper[33013]: I0313 10:58:21.520708 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 10:58:21.522640 master-0 kubenswrapper[33013]: I0313 10:58:21.520760 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 13 10:58:21.522640 master-0 kubenswrapper[33013]: I0313 10:58:21.520982 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 10:58:21.522640 master-0 kubenswrapper[33013]: I0313 10:58:21.521152 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 10:58:21.522640 master-0 kubenswrapper[33013]: I0313 10:58:21.521500 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 13 10:58:21.526483 master-0 kubenswrapper[33013]: I0313 10:58:21.526406 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 13 10:58:21.529813 master-0 kubenswrapper[33013]: I0313 10:58:21.529758 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"]
Mar 13 10:58:21.530858 master-0 kubenswrapper[33013]: I0313 10:58:21.530800 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-656877d9c5-dgdlt"]
Mar 13 10:58:21.559464 master-0 kubenswrapper[33013]: I0313 10:58:21.559401 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38051e80-1d0d-4417-bbad-b2055ea3360e-config\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.559464 master-0 kubenswrapper[33013]: I0313 10:58:21.559507 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcv7z\" (UniqueName: \"kubernetes.io/projected/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-kube-api-access-dcv7z\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.559974 master-0 kubenswrapper[33013]: I0313 10:58:21.559553 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-service-ca\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.559974 master-0 kubenswrapper[33013]: I0313 10:58:21.559580 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-config\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.559974 master-0 kubenswrapper[33013]: I0313 10:58:21.559629 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-trusted-ca-bundle\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.559974 master-0 kubenswrapper[33013]: I0313 10:58:21.559656 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mchd\" (UniqueName: \"kubernetes.io/projected/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-kube-api-access-4mchd\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.559974 master-0 kubenswrapper[33013]: I0313 10:58:21.559681 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38051e80-1d0d-4417-bbad-b2055ea3360e-client-ca\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.559974 master-0 kubenswrapper[33013]: I0313 10:58:21.559728 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-config\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.559974 master-0 kubenswrapper[33013]: I0313 10:58:21.559760 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-oauth-config\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.560296 master-0 kubenswrapper[33013]: I0313 10:58:21.560056 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-serving-cert\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.560296 master-0 kubenswrapper[33013]: I0313 10:58:21.560130 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-client-ca\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.560296 master-0 kubenswrapper[33013]: I0313 10:58:21.560201 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-serving-cert\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.560442 master-0 kubenswrapper[33013]: I0313 10:58:21.560304 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tpdq\" (UniqueName: \"kubernetes.io/projected/38051e80-1d0d-4417-bbad-b2055ea3360e-kube-api-access-8tpdq\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.560442 master-0 kubenswrapper[33013]: I0313 10:58:21.560394 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-oauth-serving-cert\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.560681 master-0 kubenswrapper[33013]: I0313 10:58:21.560618 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38051e80-1d0d-4417-bbad-b2055ea3360e-proxy-ca-bundles\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.560762 master-0 kubenswrapper[33013]: I0313 10:58:21.560694 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38051e80-1d0d-4417-bbad-b2055ea3360e-serving-cert\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.561548 master-0 kubenswrapper[33013]: I0313 10:58:21.561482 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-config\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.561548 master-0 kubenswrapper[33013]: I0313 10:58:21.561517 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-oauth-serving-cert\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.561816 master-0 kubenswrapper[33013]: I0313 10:58:21.561785 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-trusted-ca-bundle\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.562306 master-0 kubenswrapper[33013]: I0313 10:58:21.562271 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-service-ca\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.563831 master-0 kubenswrapper[33013]: I0313 10:58:21.563772 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-serving-cert\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.566268 master-0 kubenswrapper[33013]: I0313 10:58:21.566207 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-oauth-config\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.595890 master-0 kubenswrapper[33013]: I0313 10:58:21.595687 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mchd\" (UniqueName: \"kubernetes.io/projected/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-kube-api-access-4mchd\") pod \"console-dc44494b5-hphsz\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 10:58:21.662164 master-0 kubenswrapper[33013]: I0313 10:58:21.662084 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-client-ca\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.662164 master-0 kubenswrapper[33013]: I0313 10:58:21.662151 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-serving-cert\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.662419 master-0 kubenswrapper[33013]: I0313 10:58:21.662299 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tpdq\" (UniqueName: \"kubernetes.io/projected/38051e80-1d0d-4417-bbad-b2055ea3360e-kube-api-access-8tpdq\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.663453 master-0 kubenswrapper[33013]: I0313 10:58:21.663401 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-client-ca\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.664902 master-0 kubenswrapper[33013]: I0313 10:58:21.664454 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38051e80-1d0d-4417-bbad-b2055ea3360e-proxy-ca-bundles\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.664902 master-0 kubenswrapper[33013]: I0313 10:58:21.664488 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38051e80-1d0d-4417-bbad-b2055ea3360e-serving-cert\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.664902 master-0 kubenswrapper[33013]: I0313 10:58:21.664538 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38051e80-1d0d-4417-bbad-b2055ea3360e-config\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.664902 master-0 kubenswrapper[33013]: I0313 10:58:21.664576 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcv7z\" (UniqueName: \"kubernetes.io/projected/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-kube-api-access-dcv7z\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.664902 master-0 kubenswrapper[33013]: I0313 10:58:21.664640 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38051e80-1d0d-4417-bbad-b2055ea3360e-client-ca\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt"
Mar 13 10:58:21.664902 master-0 kubenswrapper[33013]: I0313 10:58:21.664886 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-config\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"
Mar 13 10:58:21.665848 master-0 kubenswrapper[33013]: I0313 10:58:21.665824 33013
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-serving-cert\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv" Mar 13 10:58:21.666197 master-0 kubenswrapper[33013]: I0313 10:58:21.666154 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38051e80-1d0d-4417-bbad-b2055ea3360e-config\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" Mar 13 10:58:21.666662 master-0 kubenswrapper[33013]: I0313 10:58:21.666634 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38051e80-1d0d-4417-bbad-b2055ea3360e-client-ca\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" Mar 13 10:58:21.667372 master-0 kubenswrapper[33013]: I0313 10:58:21.667327 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-config\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv" Mar 13 10:58:21.667841 master-0 kubenswrapper[33013]: I0313 10:58:21.667807 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38051e80-1d0d-4417-bbad-b2055ea3360e-proxy-ca-bundles\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " 
pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" Mar 13 10:58:21.669076 master-0 kubenswrapper[33013]: I0313 10:58:21.669035 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38051e80-1d0d-4417-bbad-b2055ea3360e-serving-cert\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" Mar 13 10:58:21.680105 master-0 kubenswrapper[33013]: I0313 10:58:21.680057 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tpdq\" (UniqueName: \"kubernetes.io/projected/38051e80-1d0d-4417-bbad-b2055ea3360e-kube-api-access-8tpdq\") pod \"controller-manager-656877d9c5-dgdlt\" (UID: \"38051e80-1d0d-4417-bbad-b2055ea3360e\") " pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" Mar 13 10:58:21.682215 master-0 kubenswrapper[33013]: I0313 10:58:21.682174 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcv7z\" (UniqueName: \"kubernetes.io/projected/7e233342-a4d9-4a5b-b8cd-efe0a697cab7-kube-api-access-dcv7z\") pod \"route-controller-manager-6f49cf4456-jdhcv\" (UID: \"7e233342-a4d9-4a5b-b8cd-efe0a697cab7\") " pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv" Mar 13 10:58:21.706734 master-0 kubenswrapper[33013]: I0313 10:58:21.706680 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-dc44494b5-hphsz" Mar 13 10:58:21.853849 master-0 kubenswrapper[33013]: I0313 10:58:21.852800 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" Mar 13 10:58:21.856990 master-0 kubenswrapper[33013]: I0313 10:58:21.856107 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv" Mar 13 10:58:22.196531 master-0 kubenswrapper[33013]: I0313 10:58:22.196487 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-dc44494b5-hphsz"] Mar 13 10:58:22.310813 master-0 kubenswrapper[33013]: I0313 10:58:22.310778 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv"] Mar 13 10:58:22.361967 master-0 kubenswrapper[33013]: I0313 10:58:22.361893 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-656877d9c5-dgdlt"] Mar 13 10:58:22.372614 master-0 kubenswrapper[33013]: W0313 10:58:22.372542 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38051e80_1d0d_4417_bbad_b2055ea3360e.slice/crio-e664f7044eb191be8381900fd222c0982faed2dccde37607f0da715e50c30901 WatchSource:0}: Error finding container e664f7044eb191be8381900fd222c0982faed2dccde37607f0da715e50c30901: Status 404 returned error can't find the container with id e664f7044eb191be8381900fd222c0982faed2dccde37607f0da715e50c30901 Mar 13 10:58:22.727243 master-0 kubenswrapper[33013]: I0313 10:58:22.727172 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc" path="/var/lib/kubelet/pods/a1f4fd8d-3ee0-44f7-a94a-ffa0658cc2bc/volumes" Mar 13 10:58:22.746618 master-0 kubenswrapper[33013]: I0313 10:58:22.742721 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09f42db-e6d7-469d-9761-88a879f6aa6b" path="/var/lib/kubelet/pods/c09f42db-e6d7-469d-9761-88a879f6aa6b/volumes" Mar 13 10:58:22.746618 master-0 kubenswrapper[33013]: I0313 10:58:22.743439 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c968c6b5-f045-4f52-80b0-15df67f4eba3" 
path="/var/lib/kubelet/pods/c968c6b5-f045-4f52-80b0-15df67f4eba3/volumes" Mar 13 10:58:23.147323 master-0 kubenswrapper[33013]: I0313 10:58:23.147218 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" event={"ID":"38051e80-1d0d-4417-bbad-b2055ea3360e","Type":"ContainerStarted","Data":"ad08fe6237bbba00a86d296ac43ac06b06f10eb71ea706ef89444f10da18f436"} Mar 13 10:58:23.147323 master-0 kubenswrapper[33013]: I0313 10:58:23.147312 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" event={"ID":"38051e80-1d0d-4417-bbad-b2055ea3360e","Type":"ContainerStarted","Data":"e664f7044eb191be8381900fd222c0982faed2dccde37607f0da715e50c30901"} Mar 13 10:58:23.148205 master-0 kubenswrapper[33013]: I0313 10:58:23.147746 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" Mar 13 10:58:23.150243 master-0 kubenswrapper[33013]: I0313 10:58:23.150202 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv" event={"ID":"7e233342-a4d9-4a5b-b8cd-efe0a697cab7","Type":"ContainerStarted","Data":"e382028eb20b8a5e4c62ad64e949d1135cfc9b80b61adc08cf85af726143d766"} Mar 13 10:58:23.150243 master-0 kubenswrapper[33013]: I0313 10:58:23.150243 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv" event={"ID":"7e233342-a4d9-4a5b-b8cd-efe0a697cab7","Type":"ContainerStarted","Data":"07268ad40961acff8c64e34dd07d0cfea236e6cef156169b2774c44b06c2d10f"} Mar 13 10:58:23.152457 master-0 kubenswrapper[33013]: I0313 10:58:23.152393 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dc44494b5-hphsz" 
event={"ID":"fbe2e6b6-cd6e-490b-b89e-ed78463012e3","Type":"ContainerStarted","Data":"4a18dd10aa2bd5ad5912ef46ab2e43f10b398b974a4eef2d39a9131f286e217f"} Mar 13 10:58:23.152457 master-0 kubenswrapper[33013]: I0313 10:58:23.152449 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dc44494b5-hphsz" event={"ID":"fbe2e6b6-cd6e-490b-b89e-ed78463012e3","Type":"ContainerStarted","Data":"448b3e4908ad0d7658cd7f0a85b6e62c8bf7fd7d24209557f14656b451605f6b"} Mar 13 10:58:23.157041 master-0 kubenswrapper[33013]: I0313 10:58:23.156780 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" Mar 13 10:58:23.172178 master-0 kubenswrapper[33013]: I0313 10:58:23.172057 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-656877d9c5-dgdlt" podStartSLOduration=3.17202509 podStartE2EDuration="3.17202509s" podCreationTimestamp="2026-03-13 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:58:23.17094557 +0000 UTC m=+86.646898939" watchObservedRunningTime="2026-03-13 10:58:23.17202509 +0000 UTC m=+86.647978439" Mar 13 10:58:23.196798 master-0 kubenswrapper[33013]: I0313 10:58:23.196713 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-dc44494b5-hphsz" podStartSLOduration=2.1966979110000002 podStartE2EDuration="2.196697911s" podCreationTimestamp="2026-03-13 10:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:58:23.195816606 +0000 UTC m=+86.671769955" watchObservedRunningTime="2026-03-13 10:58:23.196697911 +0000 UTC m=+86.672651250" Mar 13 10:58:23.220328 master-0 kubenswrapper[33013]: I0313 10:58:23.220229 33013 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv" podStartSLOduration=3.220201229 podStartE2EDuration="3.220201229s" podCreationTimestamp="2026-03-13 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:58:23.217350749 +0000 UTC m=+86.693304098" watchObservedRunningTime="2026-03-13 10:58:23.220201229 +0000 UTC m=+86.696154578" Mar 13 10:58:24.158427 master-0 kubenswrapper[33013]: I0313 10:58:24.158380 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv" Mar 13 10:58:24.162558 master-0 kubenswrapper[33013]: I0313 10:58:24.162520 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f49cf4456-jdhcv" Mar 13 10:58:26.483922 master-0 kubenswrapper[33013]: I0313 10:58:26.483868 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 10:58:26.484794 master-0 kubenswrapper[33013]: I0313 10:58:26.484773 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:26.496040 master-0 kubenswrapper[33013]: I0313 10:58:26.495871 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 10:58:26.496040 master-0 kubenswrapper[33013]: I0313 10:58:26.495929 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-525r2" Mar 13 10:58:26.559199 master-0 kubenswrapper[33013]: I0313 10:58:26.559117 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-var-lock\") pod \"installer-5-master-0\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:26.559494 master-0 kubenswrapper[33013]: I0313 10:58:26.559214 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:26.559494 master-0 kubenswrapper[33013]: I0313 10:58:26.559255 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82b14909-f429-41f4-aef4-771e17ecab97-kube-api-access\") pod \"installer-5-master-0\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:26.660928 master-0 kubenswrapper[33013]: I0313 10:58:26.660835 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-var-lock\") pod 
\"installer-5-master-0\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:26.661214 master-0 kubenswrapper[33013]: I0313 10:58:26.660959 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:26.661214 master-0 kubenswrapper[33013]: I0313 10:58:26.660982 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-var-lock\") pod \"installer-5-master-0\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:26.661214 master-0 kubenswrapper[33013]: I0313 10:58:26.661009 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82b14909-f429-41f4-aef4-771e17ecab97-kube-api-access\") pod \"installer-5-master-0\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:26.661316 master-0 kubenswrapper[33013]: I0313 10:58:26.661175 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:27.231639 master-0 kubenswrapper[33013]: I0313 10:58:27.230869 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 10:58:28.484567 master-0 kubenswrapper[33013]: I0313 10:58:28.484395 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82b14909-f429-41f4-aef4-771e17ecab97-kube-api-access\") pod \"installer-5-master-0\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:28.614972 master-0 kubenswrapper[33013]: I0313 10:58:28.614927 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:58:29.768717 master-0 kubenswrapper[33013]: I0313 10:58:29.768485 33013 patch_prober.go:28] interesting pod/console-66b864759f-6clbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 13 10:58:29.768717 master-0 kubenswrapper[33013]: I0313 10:58:29.768542 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-66b864759f-6clbz" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 13 10:58:30.566149 master-0 kubenswrapper[33013]: I0313 10:58:30.566064 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 10:58:30.578034 master-0 kubenswrapper[33013]: W0313 10:58:30.577968 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod82b14909_f429_41f4_aef4_771e17ecab97.slice/crio-b361a1850d0d10337a4bbff3b2a2cc59282548db5f49500a3659408f3dc04943 WatchSource:0}: Error finding container b361a1850d0d10337a4bbff3b2a2cc59282548db5f49500a3659408f3dc04943: Status 404 returned error can't find the container with id b361a1850d0d10337a4bbff3b2a2cc59282548db5f49500a3659408f3dc04943 Mar 13 10:58:31.215810 master-0 kubenswrapper[33013]: I0313 10:58:31.215739 33013 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"82b14909-f429-41f4-aef4-771e17ecab97","Type":"ContainerStarted","Data":"b361a1850d0d10337a4bbff3b2a2cc59282548db5f49500a3659408f3dc04943"} Mar 13 10:58:31.708038 master-0 kubenswrapper[33013]: I0313 10:58:31.707990 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-dc44494b5-hphsz" Mar 13 10:58:31.708314 master-0 kubenswrapper[33013]: I0313 10:58:31.708302 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-dc44494b5-hphsz" Mar 13 10:58:31.709680 master-0 kubenswrapper[33013]: I0313 10:58:31.709640 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 10:58:31.709760 master-0 kubenswrapper[33013]: I0313 10:58:31.709698 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 10:58:32.225736 master-0 kubenswrapper[33013]: I0313 10:58:32.225613 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"82b14909-f429-41f4-aef4-771e17ecab97","Type":"ContainerStarted","Data":"fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa"} Mar 13 10:58:32.244226 master-0 kubenswrapper[33013]: I0313 10:58:32.244149 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=7.244131132 podStartE2EDuration="7.244131132s" 
podCreationTimestamp="2026-03-13 10:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:58:32.241121728 +0000 UTC m=+95.717075077" watchObservedRunningTime="2026-03-13 10:58:32.244131132 +0000 UTC m=+95.720084481" Mar 13 10:58:39.768900 master-0 kubenswrapper[33013]: I0313 10:58:39.768820 33013 patch_prober.go:28] interesting pod/console-66b864759f-6clbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 13 10:58:39.768900 master-0 kubenswrapper[33013]: I0313 10:58:39.768890 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-66b864759f-6clbz" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 13 10:58:41.183868 master-0 kubenswrapper[33013]: I0313 10:58:41.183799 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 10:58:41.184424 master-0 kubenswrapper[33013]: I0313 10:58:41.184049 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="82b14909-f429-41f4-aef4-771e17ecab97" containerName="installer" containerID="cri-o://fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa" gracePeriod=30 Mar 13 10:58:41.708107 master-0 kubenswrapper[33013]: I0313 10:58:41.708048 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 10:58:41.708338 master-0 kubenswrapper[33013]: I0313 
10:58:41.708139 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 10:58:44.380162 master-0 kubenswrapper[33013]: I0313 10:58:44.380072 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 13 10:58:44.381285 master-0 kubenswrapper[33013]: I0313 10:58:44.381244 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.402320 master-0 kubenswrapper[33013]: I0313 10:58:44.402264 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 13 10:58:44.451451 master-0 kubenswrapper[33013]: I0313 10:58:44.451365 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.451451 master-0 kubenswrapper[33013]: I0313 10:58:44.451430 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kube-api-access\") pod \"installer-6-master-0\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.451451 master-0 kubenswrapper[33013]: I0313 10:58:44.451465 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-var-lock\") pod \"installer-6-master-0\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.554306 master-0 kubenswrapper[33013]: I0313 10:58:44.554212 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.554306 master-0 kubenswrapper[33013]: I0313 10:58:44.554294 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kube-api-access\") pod \"installer-6-master-0\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.554652 master-0 kubenswrapper[33013]: I0313 10:58:44.554431 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.554652 master-0 kubenswrapper[33013]: I0313 10:58:44.554494 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-var-lock\") pod \"installer-6-master-0\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.554652 master-0 kubenswrapper[33013]: I0313 10:58:44.554640 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-var-lock\") pod \"installer-6-master-0\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.571023 master-0 kubenswrapper[33013]: I0313 10:58:44.570970 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kube-api-access\") pod \"installer-6-master-0\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:44.710167 master-0 kubenswrapper[33013]: I0313 10:58:44.710051 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:58:45.146828 master-0 kubenswrapper[33013]: I0313 10:58:45.146629 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 13 10:58:45.152965 master-0 kubenswrapper[33013]: W0313 10:58:45.152144 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcd6c4b8c_418d_4f69_9a1f_ebd0ee56daec.slice/crio-e062ff9920fcd6c629d3cc5f823b82a40372812589bb78603cad4785b8ce5996 WatchSource:0}: Error finding container e062ff9920fcd6c629d3cc5f823b82a40372812589bb78603cad4785b8ce5996: Status 404 returned error can't find the container with id e062ff9920fcd6c629d3cc5f823b82a40372812589bb78603cad4785b8ce5996 Mar 13 10:58:45.156383 master-0 kubenswrapper[33013]: I0313 10:58:45.156332 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" podUID="7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" containerName="oauth-openshift" containerID="cri-o://48c965cc632a0a17ec00213143fda2c334316e2d7519bf2bd16e390d52d01130" gracePeriod=15 Mar 13 10:58:45.321872 master-0 kubenswrapper[33013]: I0313 10:58:45.321823 33013 
generic.go:334] "Generic (PLEG): container finished" podID="7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" containerID="48c965cc632a0a17ec00213143fda2c334316e2d7519bf2bd16e390d52d01130" exitCode=0 Mar 13 10:58:45.322047 master-0 kubenswrapper[33013]: I0313 10:58:45.321901 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" event={"ID":"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf","Type":"ContainerDied","Data":"48c965cc632a0a17ec00213143fda2c334316e2d7519bf2bd16e390d52d01130"} Mar 13 10:58:45.323104 master-0 kubenswrapper[33013]: I0313 10:58:45.323071 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec","Type":"ContainerStarted","Data":"e062ff9920fcd6c629d3cc5f823b82a40372812589bb78603cad4785b8ce5996"} Mar 13 10:58:45.681171 master-0 kubenswrapper[33013]: I0313 10:58:45.681103 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" Mar 13 10:58:45.774571 master-0 kubenswrapper[33013]: I0313 10:58:45.774419 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5"] Mar 13 10:58:45.774966 master-0 kubenswrapper[33013]: E0313 10:58:45.774913 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" containerName="oauth-openshift" Mar 13 10:58:45.774966 master-0 kubenswrapper[33013]: I0313 10:58:45.774940 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" containerName="oauth-openshift" Mar 13 10:58:45.775121 master-0 kubenswrapper[33013]: I0313 10:58:45.775076 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-dir\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.775176 master-0 kubenswrapper[33013]: I0313 10:58:45.775138 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-session\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.775176 master-0 kubenswrapper[33013]: I0313 10:58:45.775163 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" containerName="oauth-openshift" Mar 13 10:58:45.775778 master-0 kubenswrapper[33013]: I0313 10:58:45.775738 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.775164 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fhpm\" (UniqueName: \"kubernetes.io/projected/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-kube-api-access-9fhpm\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779193 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-trusted-ca-bundle\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779267 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-ocp-branding-template\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779346 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-provider-selection\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779405 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-error\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779469 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-login\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779513 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-service-ca\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779548 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-policies\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779574 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-cliconfig\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779604 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-router-certs\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779667 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-serving-cert\") pod \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\" (UID: \"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf\") " Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.779927 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.780191 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.780250 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.781806 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.783949 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.784066 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.784153 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.784870 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:45.785307 master-0 kubenswrapper[33013]: I0313 10:58:45.785120 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:45.788164 master-0 kubenswrapper[33013]: I0313 10:58:45.787409 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:45.788797 master-0 kubenswrapper[33013]: I0313 10:58:45.788363 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-kube-api-access-9fhpm" (OuterVolumeSpecName: "kube-api-access-9fhpm") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "kube-api-access-9fhpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:58:45.796855 master-0 kubenswrapper[33013]: I0313 10:58:45.788707 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:45.797093 master-0 kubenswrapper[33013]: I0313 10:58:45.792640 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:45.798060 master-0 kubenswrapper[33013]: I0313 10:58:45.797974 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" (UID: "7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:45.852663 master-0 kubenswrapper[33013]: I0313 10:58:45.851589 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5"] Mar 13 10:58:45.882198 master-0 kubenswrapper[33013]: I0313 10:58:45.882140 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882198 master-0 kubenswrapper[33013]: I0313 10:58:45.882187 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-session\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882198 master-0 kubenswrapper[33013]: I0313 10:58:45.882207 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4klh\" (UniqueName: \"kubernetes.io/projected/78a9191c-4284-4758-b0c0-6e413dd707ae-kube-api-access-n4klh\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882502 master-0 kubenswrapper[33013]: I0313 10:58:45.882240 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882502 master-0 kubenswrapper[33013]: I0313 10:58:45.882266 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882502 master-0 kubenswrapper[33013]: I0313 10:58:45.882317 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-user-template-error\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882502 master-0 kubenswrapper[33013]: I0313 10:58:45.882340 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-audit-policies\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882643 master-0 kubenswrapper[33013]: I0313 10:58:45.882475 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " 
pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882682 master-0 kubenswrapper[33013]: I0313 10:58:45.882662 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882802 master-0 kubenswrapper[33013]: I0313 10:58:45.882775 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882843 master-0 kubenswrapper[33013]: I0313 10:58:45.882804 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.882882 master-0 kubenswrapper[33013]: I0313 10:58:45.882825 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-user-template-login\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " 
pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.883010 master-0 kubenswrapper[33013]: I0313 10:58:45.882964 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78a9191c-4284-4758-b0c0-6e413dd707ae-audit-dir\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.883130 master-0 kubenswrapper[33013]: I0313 10:58:45.883117 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883183 master-0 kubenswrapper[33013]: I0313 10:58:45.883131 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883183 master-0 kubenswrapper[33013]: I0313 10:58:45.883142 33013 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883183 master-0 kubenswrapper[33013]: I0313 10:58:45.883154 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883183 master-0 kubenswrapper[33013]: I0313 10:58:45.883164 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883183 master-0 kubenswrapper[33013]: I0313 10:58:45.883176 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883375 master-0 kubenswrapper[33013]: I0313 10:58:45.883185 33013 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883375 master-0 kubenswrapper[33013]: I0313 10:58:45.883194 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883375 master-0 kubenswrapper[33013]: I0313 10:58:45.883206 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fhpm\" (UniqueName: \"kubernetes.io/projected/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-kube-api-access-9fhpm\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883375 master-0 kubenswrapper[33013]: I0313 10:58:45.883218 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.883375 master-0 kubenswrapper[33013]: I0313 10:58:45.883230 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-provider-selection\") on node \"master-0\" 
DevicePath \"\"" Mar 13 10:58:45.883375 master-0 kubenswrapper[33013]: I0313 10:58:45.883239 33013 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:45.984734 master-0 kubenswrapper[33013]: I0313 10:58:45.984647 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.985000 master-0 kubenswrapper[33013]: I0313 10:58:45.984891 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.985000 master-0 kubenswrapper[33013]: I0313 10:58:45.984960 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.985669 master-0 kubenswrapper[33013]: I0313 10:58:45.985620 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.985669 master-0 kubenswrapper[33013]: I0313 10:58:45.985652 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-user-template-login\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.985824 master-0 kubenswrapper[33013]: I0313 10:58:45.985589 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.985824 master-0 kubenswrapper[33013]: I0313 10:58:45.985684 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78a9191c-4284-4758-b0c0-6e413dd707ae-audit-dir\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.985824 master-0 kubenswrapper[33013]: I0313 10:58:45.985760 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " 
pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.985824 master-0 kubenswrapper[33013]: I0313 10:58:45.985790 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-session\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.985824 master-0 kubenswrapper[33013]: I0313 10:58:45.985806 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4klh\" (UniqueName: \"kubernetes.io/projected/78a9191c-4284-4758-b0c0-6e413dd707ae-kube-api-access-n4klh\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.986108 master-0 kubenswrapper[33013]: I0313 10:58:45.985855 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.986108 master-0 kubenswrapper[33013]: I0313 10:58:45.985891 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78a9191c-4284-4758-b0c0-6e413dd707ae-audit-dir\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.986108 master-0 kubenswrapper[33013]: I0313 10:58:45.985930 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.986108 master-0 kubenswrapper[33013]: I0313 10:58:45.986022 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-user-template-error\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.986108 master-0 kubenswrapper[33013]: I0313 10:58:45.986060 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-audit-policies\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.987319 master-0 kubenswrapper[33013]: I0313 10:58:45.987038 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-audit-policies\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.987319 master-0 kubenswrapper[33013]: I0313 10:58:45.987276 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " 
pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.988209 master-0 kubenswrapper[33013]: I0313 10:58:45.988181 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.988819 master-0 kubenswrapper[33013]: I0313 10:58:45.988776 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-user-template-login\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.989056 master-0 kubenswrapper[33013]: I0313 10:58:45.989027 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.989174 master-0 kubenswrapper[33013]: I0313 10:58:45.989120 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.989299 master-0 kubenswrapper[33013]: I0313 10:58:45.989271 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-session\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.990593 master-0 kubenswrapper[33013]: I0313 10:58:45.990530 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-user-template-error\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.991090 master-0 kubenswrapper[33013]: I0313 10:58:45.991051 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:45.992231 master-0 kubenswrapper[33013]: I0313 10:58:45.992208 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78a9191c-4284-4758-b0c0-6e413dd707ae-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:46.001457 master-0 kubenswrapper[33013]: I0313 10:58:46.001435 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4klh\" (UniqueName: \"kubernetes.io/projected/78a9191c-4284-4758-b0c0-6e413dd707ae-kube-api-access-n4klh\") pod 
\"oauth-openshift-7f94bd6c98-f4rh5\" (UID: \"78a9191c-4284-4758-b0c0-6e413dd707ae\") " pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:46.156981 master-0 kubenswrapper[33013]: I0313 10:58:46.156904 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:46.329680 master-0 kubenswrapper[33013]: I0313 10:58:46.329605 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec","Type":"ContainerStarted","Data":"e3ce065ad028c0344758cccfff61d17b7e1a4d76732973ffd4853cb73b5c4fc3"} Mar 13 10:58:46.331716 master-0 kubenswrapper[33013]: I0313 10:58:46.331090 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" event={"ID":"7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf","Type":"ContainerDied","Data":"00902a833b73ae1850a1abba57e5aa445610845174b26c8effec37c5fd553e66"} Mar 13 10:58:46.331716 master-0 kubenswrapper[33013]: I0313 10:58:46.331120 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6758ccc497-88c27" Mar 13 10:58:46.331716 master-0 kubenswrapper[33013]: I0313 10:58:46.331147 33013 scope.go:117] "RemoveContainer" containerID="48c965cc632a0a17ec00213143fda2c334316e2d7519bf2bd16e390d52d01130" Mar 13 10:58:46.455374 master-0 kubenswrapper[33013]: I0313 10:58:46.455051 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-b94667db7-z29mk" podUID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" containerName="console" containerID="cri-o://c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6" gracePeriod=15 Mar 13 10:58:46.465256 master-0 kubenswrapper[33013]: I0313 10:58:46.465120 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=2.465101446 podStartE2EDuration="2.465101446s" podCreationTimestamp="2026-03-13 10:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:58:46.462322608 +0000 UTC m=+109.938275957" watchObservedRunningTime="2026-03-13 10:58:46.465101446 +0000 UTC m=+109.941054795" Mar 13 10:58:46.482927 master-0 kubenswrapper[33013]: I0313 10:58:46.482841 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6758ccc497-88c27"] Mar 13 10:58:46.490461 master-0 kubenswrapper[33013]: I0313 10:58:46.490389 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6758ccc497-88c27"] Mar 13 10:58:46.544999 master-0 kubenswrapper[33013]: I0313 10:58:46.544917 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5"] Mar 13 10:58:46.550373 master-0 kubenswrapper[33013]: W0313 10:58:46.550279 33013 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a9191c_4284_4758_b0c0_6e413dd707ae.slice/crio-e57cc631f775bcd2aad65ce50ad8926bad67d66ec32db8662e29f198ff94ec4b WatchSource:0}: Error finding container e57cc631f775bcd2aad65ce50ad8926bad67d66ec32db8662e29f198ff94ec4b: Status 404 returned error can't find the container with id e57cc631f775bcd2aad65ce50ad8926bad67d66ec32db8662e29f198ff94ec4b Mar 13 10:58:46.722632 master-0 kubenswrapper[33013]: I0313 10:58:46.722559 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf" path="/var/lib/kubelet/pods/7ae8bc18-77d6-45b7-81e6-16cd1c3b1abf/volumes" Mar 13 10:58:46.998167 master-0 kubenswrapper[33013]: I0313 10:58:46.998038 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-b94667db7-z29mk_6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0/console/0.log" Mar 13 10:58:46.998167 master-0 kubenswrapper[33013]: I0313 10:58:46.998152 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:58:47.102042 master-0 kubenswrapper[33013]: I0313 10:58:47.101974 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77vqt\" (UniqueName: \"kubernetes.io/projected/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-kube-api-access-77vqt\") pod \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " Mar 13 10:58:47.102376 master-0 kubenswrapper[33013]: I0313 10:58:47.102084 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-oauth-serving-cert\") pod \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " Mar 13 10:58:47.102376 master-0 kubenswrapper[33013]: I0313 10:58:47.102145 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-config\") pod \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " Mar 13 10:58:47.102376 master-0 kubenswrapper[33013]: I0313 10:58:47.102198 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-serving-cert\") pod \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " Mar 13 10:58:47.102376 master-0 kubenswrapper[33013]: I0313 10:58:47.102230 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-service-ca\") pod \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " Mar 13 10:58:47.102376 master-0 kubenswrapper[33013]: I0313 
10:58:47.102272 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-oauth-config\") pod \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\" (UID: \"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0\") " Mar 13 10:58:47.103255 master-0 kubenswrapper[33013]: I0313 10:58:47.103193 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-config" (OuterVolumeSpecName: "console-config") pod "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" (UID: "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:47.103494 master-0 kubenswrapper[33013]: I0313 10:58:47.103456 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-service-ca" (OuterVolumeSpecName: "service-ca") pod "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" (UID: "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:47.103765 master-0 kubenswrapper[33013]: I0313 10:58:47.103729 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" (UID: "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:58:47.106541 master-0 kubenswrapper[33013]: I0313 10:58:47.106500 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" (UID: "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:47.106992 master-0 kubenswrapper[33013]: I0313 10:58:47.106937 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-kube-api-access-77vqt" (OuterVolumeSpecName: "kube-api-access-77vqt") pod "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" (UID: "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0"). InnerVolumeSpecName "kube-api-access-77vqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:58:47.107760 master-0 kubenswrapper[33013]: I0313 10:58:47.107719 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" (UID: "6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:58:47.204601 master-0 kubenswrapper[33013]: I0313 10:58:47.204535 33013 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:47.204601 master-0 kubenswrapper[33013]: I0313 10:58:47.204585 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77vqt\" (UniqueName: \"kubernetes.io/projected/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-kube-api-access-77vqt\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:47.204601 master-0 kubenswrapper[33013]: I0313 10:58:47.204608 33013 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:47.204918 master-0 kubenswrapper[33013]: I0313 10:58:47.204624 33013 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:47.204918 master-0 kubenswrapper[33013]: I0313 10:58:47.204635 33013 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:47.204918 master-0 kubenswrapper[33013]: I0313 10:58:47.204645 33013 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:58:47.342183 master-0 kubenswrapper[33013]: I0313 10:58:47.342130 33013 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-b94667db7-z29mk_6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0/console/0.log" Mar 13 10:58:47.342183 master-0 kubenswrapper[33013]: I0313 10:58:47.342183 33013 generic.go:334] "Generic (PLEG): container finished" podID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" containerID="c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6" exitCode=2 Mar 13 10:58:47.342644 master-0 kubenswrapper[33013]: I0313 10:58:47.342262 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b94667db7-z29mk" event={"ID":"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0","Type":"ContainerDied","Data":"c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6"} Mar 13 10:58:47.342644 master-0 kubenswrapper[33013]: I0313 10:58:47.342309 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b94667db7-z29mk" event={"ID":"6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0","Type":"ContainerDied","Data":"1f7d2e8e2c19744fd3f8a62cfb3d5ed3ccfa3196383dd67998d63322c2273657"} Mar 13 10:58:47.342644 master-0 kubenswrapper[33013]: I0313 10:58:47.342329 33013 scope.go:117] "RemoveContainer" containerID="c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6" Mar 13 10:58:47.342644 master-0 kubenswrapper[33013]: I0313 10:58:47.342320 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b94667db7-z29mk" Mar 13 10:58:47.344364 master-0 kubenswrapper[33013]: I0313 10:58:47.344302 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" event={"ID":"78a9191c-4284-4758-b0c0-6e413dd707ae","Type":"ContainerStarted","Data":"28e55a37d8d995bb49b3d0bc59b1c6518902e099de5d8ffb39912c8ce1a1f95b"} Mar 13 10:58:47.345331 master-0 kubenswrapper[33013]: I0313 10:58:47.344369 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" event={"ID":"78a9191c-4284-4758-b0c0-6e413dd707ae","Type":"ContainerStarted","Data":"e57cc631f775bcd2aad65ce50ad8926bad67d66ec32db8662e29f198ff94ec4b"} Mar 13 10:58:47.345331 master-0 kubenswrapper[33013]: I0313 10:58:47.344823 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:47.362210 master-0 kubenswrapper[33013]: I0313 10:58:47.362151 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" Mar 13 10:58:47.365466 master-0 kubenswrapper[33013]: I0313 10:58:47.365340 33013 scope.go:117] "RemoveContainer" containerID="c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6" Mar 13 10:58:47.365968 master-0 kubenswrapper[33013]: E0313 10:58:47.365937 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6\": container with ID starting with c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6 not found: ID does not exist" containerID="c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6" Mar 13 10:58:47.366198 master-0 kubenswrapper[33013]: I0313 10:58:47.366168 33013 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6"} err="failed to get container status \"c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6\": rpc error: code = NotFound desc = could not find container \"c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6\": container with ID starting with c8ccec5bf5ecabc58f8cd1c8db0bfdac19198b240f9421965683ed2c1532e1a6 not found: ID does not exist" Mar 13 10:58:47.380721 master-0 kubenswrapper[33013]: I0313 10:58:47.380653 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f94bd6c98-f4rh5" podStartSLOduration=42.380632092 podStartE2EDuration="42.380632092s" podCreationTimestamp="2026-03-13 10:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:58:47.374894772 +0000 UTC m=+110.850848121" watchObservedRunningTime="2026-03-13 10:58:47.380632092 +0000 UTC m=+110.856585441" Mar 13 10:58:47.391450 master-0 kubenswrapper[33013]: I0313 10:58:47.391388 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-b94667db7-z29mk"] Mar 13 10:58:47.398208 master-0 kubenswrapper[33013]: I0313 10:58:47.398143 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-b94667db7-z29mk"] Mar 13 10:58:47.997887 master-0 kubenswrapper[33013]: I0313 10:58:47.997823 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx"] Mar 13 10:58:47.998491 master-0 kubenswrapper[33013]: E0313 10:58:47.998258 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" containerName="console" Mar 13 10:58:47.998491 master-0 kubenswrapper[33013]: I0313 10:58:47.998284 33013 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" containerName="console" Mar 13 10:58:47.998568 master-0 kubenswrapper[33013]: I0313 10:58:47.998554 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" containerName="console" Mar 13 10:58:47.999337 master-0 kubenswrapper[33013]: I0313 10:58:47.999310 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" Mar 13 10:58:48.002048 master-0 kubenswrapper[33013]: I0313 10:58:48.002020 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 10:58:48.002292 master-0 kubenswrapper[33013]: I0313 10:58:48.002240 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 10:58:48.011763 master-0 kubenswrapper[33013]: I0313 10:58:48.011637 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx"] Mar 13 10:58:48.014897 master-0 kubenswrapper[33013]: I0313 10:58:48.014846 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cbd14087-90e0-4218-858b-940d5e576ac6-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-nvvzx\" (UID: \"cbd14087-90e0-4218-858b-940d5e576ac6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" Mar 13 10:58:48.015021 master-0 kubenswrapper[33013]: I0313 10:58:48.014930 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cbd14087-90e0-4218-858b-940d5e576ac6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-nvvzx\" (UID: \"cbd14087-90e0-4218-858b-940d5e576ac6\") 
" pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" Mar 13 10:58:48.118491 master-0 kubenswrapper[33013]: I0313 10:58:48.116083 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cbd14087-90e0-4218-858b-940d5e576ac6-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-nvvzx\" (UID: \"cbd14087-90e0-4218-858b-940d5e576ac6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" Mar 13 10:58:48.118491 master-0 kubenswrapper[33013]: I0313 10:58:48.116182 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cbd14087-90e0-4218-858b-940d5e576ac6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-nvvzx\" (UID: \"cbd14087-90e0-4218-858b-940d5e576ac6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" Mar 13 10:58:48.118491 master-0 kubenswrapper[33013]: E0313 10:58:48.116352 33013 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 13 10:58:48.118491 master-0 kubenswrapper[33013]: E0313 10:58:48.116427 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd14087-90e0-4218-858b-940d5e576ac6-networking-console-plugin-cert podName:cbd14087-90e0-4218-858b-940d5e576ac6 nodeName:}" failed. No retries permitted until 2026-03-13 10:58:48.616407602 +0000 UTC m=+112.092360951 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/cbd14087-90e0-4218-858b-940d5e576ac6-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-nvvzx" (UID: "cbd14087-90e0-4218-858b-940d5e576ac6") : secret "networking-console-plugin-cert" not found Mar 13 10:58:48.118491 master-0 kubenswrapper[33013]: I0313 10:58:48.117762 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cbd14087-90e0-4218-858b-940d5e576ac6-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-nvvzx\" (UID: \"cbd14087-90e0-4218-858b-940d5e576ac6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" Mar 13 10:58:48.626626 master-0 kubenswrapper[33013]: I0313 10:58:48.626546 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cbd14087-90e0-4218-858b-940d5e576ac6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-nvvzx\" (UID: \"cbd14087-90e0-4218-858b-940d5e576ac6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" Mar 13 10:58:48.632329 master-0 kubenswrapper[33013]: I0313 10:58:48.632281 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cbd14087-90e0-4218-858b-940d5e576ac6-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-nvvzx\" (UID: \"cbd14087-90e0-4218-858b-940d5e576ac6\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" Mar 13 10:58:48.722989 master-0 kubenswrapper[33013]: I0313 10:58:48.722421 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0" path="/var/lib/kubelet/pods/6e0d3cbe-3565-4a46-9302-f6b5d17bf6f0/volumes" Mar 13 10:58:48.919622 master-0 
kubenswrapper[33013]: I0313 10:58:48.919417 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" Mar 13 10:58:49.063313 master-0 kubenswrapper[33013]: I0313 10:58:49.062524 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66b864759f-6clbz"] Mar 13 10:58:49.105488 master-0 kubenswrapper[33013]: I0313 10:58:49.105385 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76bbbbbcd4-rgrm6"] Mar 13 10:58:49.107587 master-0 kubenswrapper[33013]: I0313 10:58:49.106529 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.131678 master-0 kubenswrapper[33013]: I0313 10:58:49.128695 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76bbbbbcd4-rgrm6"] Mar 13 10:58:49.141634 master-0 kubenswrapper[33013]: I0313 10:58:49.140840 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-config\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.141634 master-0 kubenswrapper[33013]: I0313 10:58:49.141023 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-serving-cert\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.141634 master-0 kubenswrapper[33013]: I0313 10:58:49.141053 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-oauth-serving-cert\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.141634 master-0 kubenswrapper[33013]: I0313 10:58:49.141087 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-service-ca\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.141634 master-0 kubenswrapper[33013]: I0313 10:58:49.141119 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks5mg\" (UniqueName: \"kubernetes.io/projected/496bd468-60d6-40a1-a2ba-682e3f95a36a-kube-api-access-ks5mg\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.141634 master-0 kubenswrapper[33013]: I0313 10:58:49.141175 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-oauth-config\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.141634 master-0 kubenswrapper[33013]: I0313 10:58:49.141212 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-trusted-ca-bundle\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.249754 master-0 kubenswrapper[33013]: I0313 
10:58:49.249562 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-service-ca\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.249754 master-0 kubenswrapper[33013]: I0313 10:58:49.249710 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks5mg\" (UniqueName: \"kubernetes.io/projected/496bd468-60d6-40a1-a2ba-682e3f95a36a-kube-api-access-ks5mg\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.250022 master-0 kubenswrapper[33013]: I0313 10:58:49.249800 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-oauth-config\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.250022 master-0 kubenswrapper[33013]: I0313 10:58:49.250022 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-trusted-ca-bundle\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.250115 master-0 kubenswrapper[33013]: I0313 10:58:49.250070 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-config\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.250468 
master-0 kubenswrapper[33013]: I0313 10:58:49.250436 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-serving-cert\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.250534 master-0 kubenswrapper[33013]: I0313 10:58:49.250476 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-oauth-serving-cert\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.251889 master-0 kubenswrapper[33013]: I0313 10:58:49.251784 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-oauth-serving-cert\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.251960 master-0 kubenswrapper[33013]: I0313 10:58:49.251821 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-trusted-ca-bundle\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.251960 master-0 kubenswrapper[33013]: I0313 10:58:49.251821 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-config\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" 
Mar 13 10:58:49.252295 master-0 kubenswrapper[33013]: I0313 10:58:49.252254 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-service-ca\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.254667 master-0 kubenswrapper[33013]: I0313 10:58:49.254620 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-oauth-config\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.255706 master-0 kubenswrapper[33013]: I0313 10:58:49.255389 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-serving-cert\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.265698 master-0 kubenswrapper[33013]: I0313 10:58:49.265361 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks5mg\" (UniqueName: \"kubernetes.io/projected/496bd468-60d6-40a1-a2ba-682e3f95a36a-kube-api-access-ks5mg\") pod \"console-76bbbbbcd4-rgrm6\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.397106 master-0 kubenswrapper[33013]: I0313 10:58:49.397029 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx"] Mar 13 10:58:49.404157 master-0 kubenswrapper[33013]: W0313 10:58:49.404110 33013 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd14087_90e0_4218_858b_940d5e576ac6.slice/crio-2f2681e74af316a839ad49f950ae444725c7dd617f54ca4a8acda9734eb79fad WatchSource:0}: Error finding container 2f2681e74af316a839ad49f950ae444725c7dd617f54ca4a8acda9734eb79fad: Status 404 returned error can't find the container with id 2f2681e74af316a839ad49f950ae444725c7dd617f54ca4a8acda9734eb79fad Mar 13 10:58:49.461208 master-0 kubenswrapper[33013]: I0313 10:58:49.461151 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:49.871388 master-0 kubenswrapper[33013]: I0313 10:58:49.871334 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76bbbbbcd4-rgrm6"] Mar 13 10:58:49.875992 master-0 kubenswrapper[33013]: W0313 10:58:49.875923 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod496bd468_60d6_40a1_a2ba_682e3f95a36a.slice/crio-04c63e8c774408ba0b4e1db70a6836fbfbf2d6870623d9f8ce48872f54215040 WatchSource:0}: Error finding container 04c63e8c774408ba0b4e1db70a6836fbfbf2d6870623d9f8ce48872f54215040: Status 404 returned error can't find the container with id 04c63e8c774408ba0b4e1db70a6836fbfbf2d6870623d9f8ce48872f54215040 Mar 13 10:58:50.374876 master-0 kubenswrapper[33013]: I0313 10:58:50.374806 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" event={"ID":"cbd14087-90e0-4218-858b-940d5e576ac6","Type":"ContainerStarted","Data":"2f2681e74af316a839ad49f950ae444725c7dd617f54ca4a8acda9734eb79fad"} Mar 13 10:58:50.378202 master-0 kubenswrapper[33013]: I0313 10:58:50.378168 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76bbbbbcd4-rgrm6" 
event={"ID":"496bd468-60d6-40a1-a2ba-682e3f95a36a","Type":"ContainerStarted","Data":"90fefd75f56592057f0c06f82f4c8e3d37a50635dd811329b11289fe0259e993"} Mar 13 10:58:50.378336 master-0 kubenswrapper[33013]: I0313 10:58:50.378206 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76bbbbbcd4-rgrm6" event={"ID":"496bd468-60d6-40a1-a2ba-682e3f95a36a","Type":"ContainerStarted","Data":"04c63e8c774408ba0b4e1db70a6836fbfbf2d6870623d9f8ce48872f54215040"} Mar 13 10:58:50.404141 master-0 kubenswrapper[33013]: I0313 10:58:50.403851 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76bbbbbcd4-rgrm6" podStartSLOduration=1.403832009 podStartE2EDuration="1.403832009s" podCreationTimestamp="2026-03-13 10:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 10:58:50.401161584 +0000 UTC m=+113.877114933" watchObservedRunningTime="2026-03-13 10:58:50.403832009 +0000 UTC m=+113.879785358" Mar 13 10:58:51.387070 master-0 kubenswrapper[33013]: I0313 10:58:51.387016 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" event={"ID":"cbd14087-90e0-4218-858b-940d5e576ac6","Type":"ContainerStarted","Data":"53ec6bd0ff7069f9a3f98e4bdc1599c22ea68e0ab1b17cebbcd6cc263fca844a"} Mar 13 10:58:51.406186 master-0 kubenswrapper[33013]: I0313 10:58:51.406117 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cbd49d755-nvvzx" podStartSLOduration=2.9927002 podStartE2EDuration="4.406097874s" podCreationTimestamp="2026-03-13 10:58:47 +0000 UTC" firstStartedPulling="2026-03-13 10:58:49.407471508 +0000 UTC m=+112.883424857" lastFinishedPulling="2026-03-13 10:58:50.820869182 +0000 UTC m=+114.296822531" observedRunningTime="2026-03-13 10:58:51.404215302 +0000 UTC 
m=+114.880168641" watchObservedRunningTime="2026-03-13 10:58:51.406097874 +0000 UTC m=+114.882051223" Mar 13 10:58:51.708790 master-0 kubenswrapper[33013]: I0313 10:58:51.708636 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 10:58:51.708790 master-0 kubenswrapper[33013]: I0313 10:58:51.708706 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 10:58:59.461762 master-0 kubenswrapper[33013]: I0313 10:58:59.461666 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:59.461762 master-0 kubenswrapper[33013]: I0313 10:58:59.461748 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 10:58:59.462884 master-0 kubenswrapper[33013]: I0313 10:58:59.462788 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 13 10:58:59.462884 master-0 kubenswrapper[33013]: I0313 10:58:59.462846 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 13 10:59:01.708935 master-0 kubenswrapper[33013]: I0313 
10:59:01.708687 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 10:59:01.708935 master-0 kubenswrapper[33013]: I0313 10:59:01.708809 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 10:59:02.099518 master-0 kubenswrapper[33013]: E0313 10:59:02.099432 33013 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod82b14909_f429_41f4_aef4_771e17ecab97.slice/crio-conmon-fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa.scope\": RecentStats: unable to find data in memory cache]" Mar 13 10:59:02.400787 master-0 kubenswrapper[33013]: I0313 10:59:02.400730 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_82b14909-f429-41f4-aef4-771e17ecab97/installer/0.log" Mar 13 10:59:02.400787 master-0 kubenswrapper[33013]: I0313 10:59:02.400796 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:59:02.460735 master-0 kubenswrapper[33013]: I0313 10:59:02.460669 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_82b14909-f429-41f4-aef4-771e17ecab97/installer/0.log" Mar 13 10:59:02.461126 master-0 kubenswrapper[33013]: I0313 10:59:02.460749 33013 generic.go:334] "Generic (PLEG): container finished" podID="82b14909-f429-41f4-aef4-771e17ecab97" containerID="fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa" exitCode=1 Mar 13 10:59:02.461126 master-0 kubenswrapper[33013]: I0313 10:59:02.460796 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"82b14909-f429-41f4-aef4-771e17ecab97","Type":"ContainerDied","Data":"fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa"} Mar 13 10:59:02.461126 master-0 kubenswrapper[33013]: I0313 10:59:02.460826 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 10:59:02.461126 master-0 kubenswrapper[33013]: I0313 10:59:02.460847 33013 scope.go:117] "RemoveContainer" containerID="fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa" Mar 13 10:59:02.461126 master-0 kubenswrapper[33013]: I0313 10:59:02.460832 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"82b14909-f429-41f4-aef4-771e17ecab97","Type":"ContainerDied","Data":"b361a1850d0d10337a4bbff3b2a2cc59282548db5f49500a3659408f3dc04943"} Mar 13 10:59:02.473915 master-0 kubenswrapper[33013]: I0313 10:59:02.467751 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-var-lock\") pod \"82b14909-f429-41f4-aef4-771e17ecab97\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " Mar 13 10:59:02.473915 master-0 kubenswrapper[33013]: I0313 10:59:02.467927 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-kubelet-dir\") pod \"82b14909-f429-41f4-aef4-771e17ecab97\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " Mar 13 10:59:02.473915 master-0 kubenswrapper[33013]: I0313 10:59:02.467989 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82b14909-f429-41f4-aef4-771e17ecab97-kube-api-access\") pod \"82b14909-f429-41f4-aef4-771e17ecab97\" (UID: \"82b14909-f429-41f4-aef4-771e17ecab97\") " Mar 13 10:59:02.473915 master-0 kubenswrapper[33013]: I0313 10:59:02.468160 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-var-lock" (OuterVolumeSpecName: "var-lock") pod "82b14909-f429-41f4-aef4-771e17ecab97" (UID: 
"82b14909-f429-41f4-aef4-771e17ecab97"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:59:02.473915 master-0 kubenswrapper[33013]: I0313 10:59:02.468204 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "82b14909-f429-41f4-aef4-771e17ecab97" (UID: "82b14909-f429-41f4-aef4-771e17ecab97"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:59:02.473915 master-0 kubenswrapper[33013]: I0313 10:59:02.468368 33013 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:02.473915 master-0 kubenswrapper[33013]: I0313 10:59:02.468386 33013 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82b14909-f429-41f4-aef4-771e17ecab97-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:02.473915 master-0 kubenswrapper[33013]: I0313 10:59:02.471364 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b14909-f429-41f4-aef4-771e17ecab97-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "82b14909-f429-41f4-aef4-771e17ecab97" (UID: "82b14909-f429-41f4-aef4-771e17ecab97"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:59:02.503732 master-0 kubenswrapper[33013]: I0313 10:59:02.503697 33013 scope.go:117] "RemoveContainer" containerID="fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa" Mar 13 10:59:02.504200 master-0 kubenswrapper[33013]: E0313 10:59:02.504151 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa\": container with ID starting with fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa not found: ID does not exist" containerID="fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa" Mar 13 10:59:02.504250 master-0 kubenswrapper[33013]: I0313 10:59:02.504218 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa"} err="failed to get container status \"fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa\": rpc error: code = NotFound desc = could not find container \"fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa\": container with ID starting with fa30c0ad3b98a44649ad8702f99556d670ed09b926d3ba1835f198c6d54d30aa not found: ID does not exist" Mar 13 10:59:02.570175 master-0 kubenswrapper[33013]: I0313 10:59:02.570081 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82b14909-f429-41f4-aef4-771e17ecab97-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:02.782026 master-0 kubenswrapper[33013]: I0313 10:59:02.781942 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 10:59:02.794831 master-0 kubenswrapper[33013]: I0313 10:59:02.794760 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 
13 10:59:04.719993 master-0 kubenswrapper[33013]: I0313 10:59:04.719915 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b14909-f429-41f4-aef4-771e17ecab97" path="/var/lib/kubelet/pods/82b14909-f429-41f4-aef4-771e17ecab97/volumes" Mar 13 10:59:09.462232 master-0 kubenswrapper[33013]: I0313 10:59:09.462145 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 13 10:59:09.462232 master-0 kubenswrapper[33013]: I0313 10:59:09.462221 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 13 10:59:11.708612 master-0 kubenswrapper[33013]: I0313 10:59:11.708368 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 10:59:11.708612 master-0 kubenswrapper[33013]: I0313 10:59:11.708438 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 10:59:14.119980 master-0 kubenswrapper[33013]: I0313 10:59:14.119865 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-66b864759f-6clbz" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" 
containerID="cri-o://ae5b6ea2a145fdf7f9d35ebde17a54fa1f5cfec8a22e10004fdfdce453640d37" gracePeriod=15 Mar 13 10:59:14.556050 master-0 kubenswrapper[33013]: I0313 10:59:14.556008 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66b864759f-6clbz_aca8f47b-7610-492c-bf79-a7e598b07054/console/0.log" Mar 13 10:59:14.556155 master-0 kubenswrapper[33013]: I0313 10:59:14.556055 33013 generic.go:334] "Generic (PLEG): container finished" podID="aca8f47b-7610-492c-bf79-a7e598b07054" containerID="ae5b6ea2a145fdf7f9d35ebde17a54fa1f5cfec8a22e10004fdfdce453640d37" exitCode=2 Mar 13 10:59:14.556155 master-0 kubenswrapper[33013]: I0313 10:59:14.556091 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b864759f-6clbz" event={"ID":"aca8f47b-7610-492c-bf79-a7e598b07054","Type":"ContainerDied","Data":"ae5b6ea2a145fdf7f9d35ebde17a54fa1f5cfec8a22e10004fdfdce453640d37"} Mar 13 10:59:14.599910 master-0 kubenswrapper[33013]: I0313 10:59:14.599855 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66b864759f-6clbz_aca8f47b-7610-492c-bf79-a7e598b07054/console/0.log" Mar 13 10:59:14.600155 master-0 kubenswrapper[33013]: I0313 10:59:14.599934 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66b864759f-6clbz" Mar 13 10:59:14.646421 master-0 kubenswrapper[33013]: I0313 10:59:14.646360 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-serving-cert\") pod \"aca8f47b-7610-492c-bf79-a7e598b07054\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " Mar 13 10:59:14.646735 master-0 kubenswrapper[33013]: I0313 10:59:14.646522 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-oauth-serving-cert\") pod \"aca8f47b-7610-492c-bf79-a7e598b07054\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " Mar 13 10:59:14.646802 master-0 kubenswrapper[33013]: I0313 10:59:14.646751 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-trusted-ca-bundle\") pod \"aca8f47b-7610-492c-bf79-a7e598b07054\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " Mar 13 10:59:14.646849 master-0 kubenswrapper[33013]: I0313 10:59:14.646835 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-console-config\") pod \"aca8f47b-7610-492c-bf79-a7e598b07054\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " Mar 13 10:59:14.646949 master-0 kubenswrapper[33013]: I0313 10:59:14.646923 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26zj2\" (UniqueName: \"kubernetes.io/projected/aca8f47b-7610-492c-bf79-a7e598b07054-kube-api-access-26zj2\") pod \"aca8f47b-7610-492c-bf79-a7e598b07054\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " Mar 13 10:59:14.647557 master-0 
kubenswrapper[33013]: I0313 10:59:14.647086 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "aca8f47b-7610-492c-bf79-a7e598b07054" (UID: "aca8f47b-7610-492c-bf79-a7e598b07054"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:59:14.647557 master-0 kubenswrapper[33013]: I0313 10:59:14.647440 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "aca8f47b-7610-492c-bf79-a7e598b07054" (UID: "aca8f47b-7610-492c-bf79-a7e598b07054"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:59:14.648225 master-0 kubenswrapper[33013]: I0313 10:59:14.648183 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-console-config" (OuterVolumeSpecName: "console-config") pod "aca8f47b-7610-492c-bf79-a7e598b07054" (UID: "aca8f47b-7610-492c-bf79-a7e598b07054"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:59:14.648352 master-0 kubenswrapper[33013]: I0313 10:59:14.648318 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-oauth-config\") pod \"aca8f47b-7610-492c-bf79-a7e598b07054\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " Mar 13 10:59:14.648405 master-0 kubenswrapper[33013]: I0313 10:59:14.648359 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-service-ca\") pod \"aca8f47b-7610-492c-bf79-a7e598b07054\" (UID: \"aca8f47b-7610-492c-bf79-a7e598b07054\") " Mar 13 10:59:14.648973 master-0 kubenswrapper[33013]: I0313 10:59:14.648935 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-service-ca" (OuterVolumeSpecName: "service-ca") pod "aca8f47b-7610-492c-bf79-a7e598b07054" (UID: "aca8f47b-7610-492c-bf79-a7e598b07054"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 10:59:14.649756 master-0 kubenswrapper[33013]: I0313 10:59:14.649705 33013 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:14.649836 master-0 kubenswrapper[33013]: I0313 10:59:14.649774 33013 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:14.649836 master-0 kubenswrapper[33013]: I0313 10:59:14.649789 33013 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:14.649836 master-0 kubenswrapper[33013]: I0313 10:59:14.649803 33013 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aca8f47b-7610-492c-bf79-a7e598b07054-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:14.655103 master-0 kubenswrapper[33013]: I0313 10:59:14.655067 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "aca8f47b-7610-492c-bf79-a7e598b07054" (UID: "aca8f47b-7610-492c-bf79-a7e598b07054"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:59:14.655243 master-0 kubenswrapper[33013]: I0313 10:59:14.655201 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "aca8f47b-7610-492c-bf79-a7e598b07054" (UID: "aca8f47b-7610-492c-bf79-a7e598b07054"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 10:59:14.655332 master-0 kubenswrapper[33013]: I0313 10:59:14.655101 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca8f47b-7610-492c-bf79-a7e598b07054-kube-api-access-26zj2" (OuterVolumeSpecName: "kube-api-access-26zj2") pod "aca8f47b-7610-492c-bf79-a7e598b07054" (UID: "aca8f47b-7610-492c-bf79-a7e598b07054"). InnerVolumeSpecName "kube-api-access-26zj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:59:14.751623 master-0 kubenswrapper[33013]: I0313 10:59:14.751550 33013 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:14.751623 master-0 kubenswrapper[33013]: I0313 10:59:14.751624 33013 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aca8f47b-7610-492c-bf79-a7e598b07054-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:14.751623 master-0 kubenswrapper[33013]: I0313 10:59:14.751639 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26zj2\" (UniqueName: \"kubernetes.io/projected/aca8f47b-7610-492c-bf79-a7e598b07054-kube-api-access-26zj2\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:15.565193 master-0 kubenswrapper[33013]: I0313 10:59:15.565119 33013 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-console_console-66b864759f-6clbz_aca8f47b-7610-492c-bf79-a7e598b07054/console/0.log" Mar 13 10:59:15.566077 master-0 kubenswrapper[33013]: I0313 10:59:15.565199 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66b864759f-6clbz" event={"ID":"aca8f47b-7610-492c-bf79-a7e598b07054","Type":"ContainerDied","Data":"7cf27f8dff55faab5d8c8aff3b41971d21ec5d40698e28b5068e78833a54882a"} Mar 13 10:59:15.566077 master-0 kubenswrapper[33013]: I0313 10:59:15.565251 33013 scope.go:117] "RemoveContainer" containerID="ae5b6ea2a145fdf7f9d35ebde17a54fa1f5cfec8a22e10004fdfdce453640d37" Mar 13 10:59:15.566077 master-0 kubenswrapper[33013]: I0313 10:59:15.565425 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66b864759f-6clbz" Mar 13 10:59:15.588951 master-0 kubenswrapper[33013]: I0313 10:59:15.588900 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66b864759f-6clbz"] Mar 13 10:59:15.591506 master-0 kubenswrapper[33013]: I0313 10:59:15.591454 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-66b864759f-6clbz"] Mar 13 10:59:16.722851 master-0 kubenswrapper[33013]: I0313 10:59:16.722794 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" path="/var/lib/kubelet/pods/aca8f47b-7610-492c-bf79-a7e598b07054/volumes" Mar 13 10:59:19.462250 master-0 kubenswrapper[33013]: I0313 10:59:19.462118 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 13 10:59:19.463206 master-0 kubenswrapper[33013]: I0313 10:59:19.462329 33013 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 13 10:59:21.708314 master-0 kubenswrapper[33013]: I0313 10:59:21.708243 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 10:59:21.709082 master-0 kubenswrapper[33013]: I0313 10:59:21.708316 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 10:59:29.461956 master-0 kubenswrapper[33013]: I0313 10:59:29.461841 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 13 10:59:29.461956 master-0 kubenswrapper[33013]: I0313 10:59:29.461937 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 13 10:59:31.708653 master-0 kubenswrapper[33013]: I0313 10:59:31.708568 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection 
refused" start-of-body= Mar 13 10:59:31.709256 master-0 kubenswrapper[33013]: I0313 10:59:31.708657 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 10:59:39.462744 master-0 kubenswrapper[33013]: I0313 10:59:39.462653 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 13 10:59:39.463476 master-0 kubenswrapper[33013]: I0313 10:59:39.462759 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 13 10:59:41.708876 master-0 kubenswrapper[33013]: I0313 10:59:41.708797 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 10:59:41.709539 master-0 kubenswrapper[33013]: I0313 10:59:41.708967 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 10:59:43.072883 master-0 kubenswrapper[33013]: I0313 10:59:43.072803 33013 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 10:59:43.073536 master-0 kubenswrapper[33013]: E0313 10:59:43.073333 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b14909-f429-41f4-aef4-771e17ecab97" containerName="installer" Mar 13 10:59:43.073536 master-0 kubenswrapper[33013]: I0313 10:59:43.073359 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b14909-f429-41f4-aef4-771e17ecab97" containerName="installer" Mar 13 10:59:43.073536 master-0 kubenswrapper[33013]: E0313 10:59:43.073392 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" Mar 13 10:59:43.073536 master-0 kubenswrapper[33013]: I0313 10:59:43.073405 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" Mar 13 10:59:43.073688 master-0 kubenswrapper[33013]: I0313 10:59:43.073609 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b14909-f429-41f4-aef4-771e17ecab97" containerName="installer" Mar 13 10:59:43.073688 master-0 kubenswrapper[33013]: I0313 10:59:43.073650 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca8f47b-7610-492c-bf79-a7e598b07054" containerName="console" Mar 13 10:59:43.074206 master-0 kubenswrapper[33013]: I0313 10:59:43.074169 33013 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 10:59:43.074521 master-0 kubenswrapper[33013]: I0313 10:59:43.074449 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" containerID="cri-o://78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984" gracePeriod=15 Mar 13 10:59:43.074781 master-0 kubenswrapper[33013]: I0313 10:59:43.074673 33013 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.075192 master-0 kubenswrapper[33013]: I0313 10:59:43.075162 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" containerID="cri-o://34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3" gracePeriod=15 Mar 13 10:59:43.075254 master-0 kubenswrapper[33013]: I0313 10:59:43.075237 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445" gracePeriod=15 Mar 13 10:59:43.075315 master-0 kubenswrapper[33013]: I0313 10:59:43.075255 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07" gracePeriod=15 Mar 13 10:59:43.075372 master-0 kubenswrapper[33013]: I0313 10:59:43.075350 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378" gracePeriod=15 Mar 13 10:59:43.075557 master-0 kubenswrapper[33013]: I0313 10:59:43.075485 33013 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: E0313 10:59:43.075710 33013 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.075725 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: E0313 10:59:43.075744 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.075751 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: E0313 10:59:43.075767 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.075775 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: E0313 10:59:43.075794 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.075802 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: E0313 10:59:43.075816 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.075824 33013 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: E0313 10:59:43.075839 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.075846 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.076067 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.076092 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.076102 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.076115 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.076135 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 13 10:59:43.076303 master-0 kubenswrapper[33013]: I0313 10:59:43.076157 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 13 10:59:43.081535 master-0 kubenswrapper[33013]: I0313 10:59:43.081412 33013 status_manager.go:861] "Pod was 
deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="077dd10388b9e3e48a07382126e86621" podUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" Mar 13 10:59:43.175137 master-0 kubenswrapper[33013]: E0313 10:59:43.175078 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.190616 master-0 kubenswrapper[33013]: I0313 10:59:43.190527 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:43.190616 master-0 kubenswrapper[33013]: I0313 10:59:43.190612 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.190915 master-0 kubenswrapper[33013]: I0313 10:59:43.190661 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:43.190915 master-0 kubenswrapper[33013]: I0313 10:59:43.190699 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.190915 master-0 kubenswrapper[33013]: I0313 10:59:43.190842 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.190915 master-0 kubenswrapper[33013]: I0313 10:59:43.190865 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:43.190915 master-0 kubenswrapper[33013]: I0313 10:59:43.190895 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.191173 master-0 kubenswrapper[33013]: I0313 10:59:43.190925 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.292690 master-0 kubenswrapper[33013]: I0313 10:59:43.292650 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.292804 master-0 kubenswrapper[33013]: I0313 10:59:43.292740 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.292860 master-0 kubenswrapper[33013]: I0313 10:59:43.292823 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.292946 master-0 kubenswrapper[33013]: I0313 10:59:43.292910 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:43.293087 master-0 kubenswrapper[33013]: I0313 10:59:43.292992 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.293087 master-0 kubenswrapper[33013]: I0313 10:59:43.292995 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.293087 master-0 kubenswrapper[33013]: I0313 10:59:43.293025 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.293087 master-0 kubenswrapper[33013]: I0313 10:59:43.293037 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.293087 master-0 kubenswrapper[33013]: I0313 10:59:43.293045 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:43.293087 master-0 kubenswrapper[33013]: I0313 10:59:43.293074 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.293307 master-0 kubenswrapper[33013]: I0313 10:59:43.293204 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:43.293307 master-0 kubenswrapper[33013]: I0313 10:59:43.293240 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.293307 master-0 kubenswrapper[33013]: I0313 10:59:43.293289 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:43.293413 master-0 kubenswrapper[33013]: I0313 10:59:43.293375 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:43.293413 master-0 kubenswrapper[33013]: I0313 10:59:43.293399 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:43.293479 master-0 kubenswrapper[33013]: I0313 10:59:43.293422 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.476890 master-0 kubenswrapper[33013]: I0313 10:59:43.476719 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.496831 master-0 kubenswrapper[33013]: W0313 10:59:43.496785 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb275ed7e9ce09d69a66613ca3ae3d89e.slice/crio-cc4e45fb571fd7066b4d23391630e4d3e90c8905ca3c0ee5073f8d68db782ed8 WatchSource:0}: Error finding container cc4e45fb571fd7066b4d23391630e4d3e90c8905ca3c0ee5073f8d68db782ed8: Status 404 returned error can't find the container with id cc4e45fb571fd7066b4d23391630e4d3e90c8905ca3c0ee5073f8d68db782ed8 Mar 13 10:59:43.500136 master-0 kubenswrapper[33013]: E0313 10:59:43.500001 33013 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c61867b4886fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:b275ed7e9ce09d69a66613ca3ae3d89e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:59:43.498995453 +0000 UTC m=+166.974948802,LastTimestamp:2026-03-13 10:59:43.498995453 +0000 UTC m=+166.974948802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 10:59:43.781101 master-0 kubenswrapper[33013]: I0313 10:59:43.781037 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b275ed7e9ce09d69a66613ca3ae3d89e","Type":"ContainerStarted","Data":"714ae15495778fcf056c5fdbee5044a806f2fb2c6cea9cfee5beae8f8c530b70"} Mar 13 10:59:43.781101 master-0 kubenswrapper[33013]: I0313 10:59:43.781087 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b275ed7e9ce09d69a66613ca3ae3d89e","Type":"ContainerStarted","Data":"cc4e45fb571fd7066b4d23391630e4d3e90c8905ca3c0ee5073f8d68db782ed8"} Mar 13 10:59:43.782374 master-0 kubenswrapper[33013]: E0313 10:59:43.782340 33013 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 10:59:43.786141 master-0 kubenswrapper[33013]: I0313 10:59:43.786099 33013 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 13 10:59:43.786847 master-0 kubenswrapper[33013]: I0313 10:59:43.786807 33013 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3" exitCode=0 Mar 13 10:59:43.786847 master-0 kubenswrapper[33013]: I0313 10:59:43.786837 33013 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445" exitCode=0 Mar 13 10:59:43.786979 master-0 kubenswrapper[33013]: I0313 10:59:43.786850 33013 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378" exitCode=0 Mar 13 10:59:43.786979 master-0 kubenswrapper[33013]: I0313 10:59:43.786861 33013 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07" exitCode=2 Mar 13 10:59:43.788691 master-0 kubenswrapper[33013]: I0313 10:59:43.788661 33013 generic.go:334] "Generic (PLEG): container finished" podID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" containerID="e3ce065ad028c0344758cccfff61d17b7e1a4d76732973ffd4853cb73b5c4fc3" exitCode=0 Mar 13 10:59:43.788769 master-0 kubenswrapper[33013]: I0313 10:59:43.788701 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec","Type":"ContainerDied","Data":"e3ce065ad028c0344758cccfff61d17b7e1a4d76732973ffd4853cb73b5c4fc3"} Mar 13 10:59:43.789787 master-0 kubenswrapper[33013]: I0313 10:59:43.789748 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" 
pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:59:45.212368 master-0 kubenswrapper[33013]: I0313 10:59:45.209978 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 13 10:59:45.217689 master-0 kubenswrapper[33013]: I0313 10:59:45.216738 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:59:45.322783 master-0 kubenswrapper[33013]: I0313 10:59:45.322712 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-var-lock\") pod \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " Mar 13 10:59:45.323143 master-0 kubenswrapper[33013]: I0313 10:59:45.322838 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kube-api-access\") pod \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " Mar 13 10:59:45.323143 master-0 kubenswrapper[33013]: I0313 10:59:45.322939 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kubelet-dir\") pod \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\" (UID: \"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec\") " Mar 13 10:59:45.323415 master-0 kubenswrapper[33013]: 
I0313 10:59:45.323373 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" (UID: "cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:59:45.323463 master-0 kubenswrapper[33013]: I0313 10:59:45.323444 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-var-lock" (OuterVolumeSpecName: "var-lock") pod "cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" (UID: "cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:59:45.326226 master-0 kubenswrapper[33013]: I0313 10:59:45.326194 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" (UID: "cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 10:59:45.424868 master-0 kubenswrapper[33013]: I0313 10:59:45.424817 33013 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:45.424868 master-0 kubenswrapper[33013]: I0313 10:59:45.424852 33013 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:45.424868 master-0 kubenswrapper[33013]: I0313 10:59:45.424865 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:45.457824 master-0 kubenswrapper[33013]: I0313 10:59:45.457767 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 13 10:59:45.458636 master-0 kubenswrapper[33013]: I0313 10:59:45.458606 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 10:59:45.459715 master-0 kubenswrapper[33013]: I0313 10:59:45.459677 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:59:45.460251 master-0 kubenswrapper[33013]: I0313 10:59:45.460213 33013 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 10:59:45.525971 master-0 kubenswrapper[33013]: I0313 10:59:45.525908 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 13 10:59:45.526272 master-0 kubenswrapper[33013]: I0313 10:59:45.525989 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:59:45.526272 master-0 kubenswrapper[33013]: I0313 10:59:45.526078 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 13 10:59:45.526272 master-0 kubenswrapper[33013]: I0313 10:59:45.526165 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:59:45.526272 master-0 kubenswrapper[33013]: I0313 10:59:45.526232 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 13 10:59:45.526461 master-0 kubenswrapper[33013]: I0313 10:59:45.526294 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 10:59:45.526600 master-0 kubenswrapper[33013]: I0313 10:59:45.526555 33013 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:45.526600 master-0 kubenswrapper[33013]: I0313 10:59:45.526578 33013 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:45.526692 master-0 kubenswrapper[33013]: I0313 10:59:45.526610 33013 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 10:59:45.806631 master-0 kubenswrapper[33013]: I0313 10:59:45.806580 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 13 10:59:45.807339 master-0 kubenswrapper[33013]: I0313 10:59:45.807296 33013 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984" exitCode=0 Mar 13 10:59:45.807438 master-0 kubenswrapper[33013]: I0313 10:59:45.807397 33013 scope.go:117] "RemoveContainer" containerID="34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3" Mar 13 10:59:45.807492 master-0 kubenswrapper[33013]: I0313 10:59:45.807457 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:59:45.809007 master-0 kubenswrapper[33013]: I0313 10:59:45.808920 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec","Type":"ContainerDied","Data":"e062ff9920fcd6c629d3cc5f823b82a40372812589bb78603cad4785b8ce5996"}
Mar 13 10:59:45.809007 master-0 kubenswrapper[33013]: I0313 10:59:45.808958 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e062ff9920fcd6c629d3cc5f823b82a40372812589bb78603cad4785b8ce5996"
Mar 13 10:59:45.809007 master-0 kubenswrapper[33013]: I0313 10:59:45.808988 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 13 10:59:45.825906 master-0 kubenswrapper[33013]: I0313 10:59:45.825871 33013 scope.go:117] "RemoveContainer" containerID="ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445"
Mar 13 10:59:45.827451 master-0 kubenswrapper[33013]: I0313 10:59:45.827398 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:45.828210 master-0 kubenswrapper[33013]: I0313 10:59:45.828172 33013 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:45.836572 master-0 kubenswrapper[33013]: I0313 10:59:45.836526 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:45.837170 master-0 kubenswrapper[33013]: I0313 10:59:45.837129 33013 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:45.844188 master-0 kubenswrapper[33013]: I0313 10:59:45.844145 33013 scope.go:117] "RemoveContainer" containerID="87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378"
Mar 13 10:59:45.859717 master-0 kubenswrapper[33013]: I0313 10:59:45.859682 33013 scope.go:117] "RemoveContainer" containerID="1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07"
Mar 13 10:59:45.872345 master-0 kubenswrapper[33013]: I0313 10:59:45.872297 33013 scope.go:117] "RemoveContainer" containerID="78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984"
Mar 13 10:59:45.887144 master-0 kubenswrapper[33013]: I0313 10:59:45.887105 33013 scope.go:117] "RemoveContainer" containerID="471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca"
Mar 13 10:59:45.899952 master-0 kubenswrapper[33013]: I0313 10:59:45.899909 33013 scope.go:117] "RemoveContainer" containerID="34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3"
Mar 13 10:59:45.900460 master-0 kubenswrapper[33013]: E0313 10:59:45.900411 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3\": container with ID starting with 34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3 not found: ID does not exist" containerID="34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3"
Mar 13 10:59:45.900540 master-0 kubenswrapper[33013]: I0313 10:59:45.900449 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3"} err="failed to get container status \"34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3\": rpc error: code = NotFound desc = could not find container \"34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3\": container with ID starting with 34c71f954534cca434c7c802f7855a3b7861fd19181e83ce6a5c7e4eadd5d1b3 not found: ID does not exist"
Mar 13 10:59:45.900540 master-0 kubenswrapper[33013]: I0313 10:59:45.900476 33013 scope.go:117] "RemoveContainer" containerID="ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445"
Mar 13 10:59:45.901097 master-0 kubenswrapper[33013]: E0313 10:59:45.901051 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445\": container with ID starting with ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445 not found: ID does not exist" containerID="ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445"
Mar 13 10:59:45.901097 master-0 kubenswrapper[33013]: I0313 10:59:45.901084 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445"} err="failed to get container status \"ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445\": rpc error: code = NotFound desc = could not find container \"ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445\": container with ID starting with ba51f5c967029e2b068021d1a83ee4f598eaadf9c3a2516df68908ac239d7445 not found: ID does not exist"
Mar 13 10:59:45.901239 master-0 kubenswrapper[33013]: I0313 10:59:45.901105 33013 scope.go:117] "RemoveContainer" containerID="87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378"
Mar 13 10:59:45.901387 master-0 kubenswrapper[33013]: E0313 10:59:45.901361 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378\": container with ID starting with 87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378 not found: ID does not exist" containerID="87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378"
Mar 13 10:59:45.901448 master-0 kubenswrapper[33013]: I0313 10:59:45.901387 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378"} err="failed to get container status \"87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378\": rpc error: code = NotFound desc = could not find container \"87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378\": container with ID starting with 87ad10810594d3a8b47e2d2f0ea99d2d22bd3431702cd859d74cd9c630e59378 not found: ID does not exist"
Mar 13 10:59:45.901448 master-0 kubenswrapper[33013]: I0313 10:59:45.901403 33013 scope.go:117] "RemoveContainer" containerID="1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07"
Mar 13 10:59:45.901721 master-0 kubenswrapper[33013]: E0313 10:59:45.901702 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07\": container with ID starting with 1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07 not found: ID does not exist" containerID="1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07"
Mar 13 10:59:45.901721 master-0 kubenswrapper[33013]: I0313 10:59:45.901721 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07"} err="failed to get container status \"1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07\": rpc error: code = NotFound desc = could not find container \"1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07\": container with ID starting with 1ecea7b8d5e69133ad13b5b777fbc920ab41ff3583523c6b2276f6193ca1bf07 not found: ID does not exist"
Mar 13 10:59:45.901838 master-0 kubenswrapper[33013]: I0313 10:59:45.901734 33013 scope.go:117] "RemoveContainer" containerID="78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984"
Mar 13 10:59:45.902111 master-0 kubenswrapper[33013]: E0313 10:59:45.902079 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984\": container with ID starting with 78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984 not found: ID does not exist" containerID="78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984"
Mar 13 10:59:45.902186 master-0 kubenswrapper[33013]: I0313 10:59:45.902111 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984"} err="failed to get container status \"78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984\": rpc error: code = NotFound desc = could not find container \"78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984\": container with ID starting with 78a5083f0f4488ca7c6e4d90cf72bc643a68b4410b27b3743964b73f858c2984 not found: ID does not exist"
Mar 13 10:59:45.902186 master-0 kubenswrapper[33013]: I0313 10:59:45.902130 33013 scope.go:117] "RemoveContainer" containerID="471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca"
Mar 13 10:59:45.902365 master-0 kubenswrapper[33013]: E0313 10:59:45.902341 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca\": container with ID starting with 471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca not found: ID does not exist" containerID="471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca"
Mar 13 10:59:45.902431 master-0 kubenswrapper[33013]: I0313 10:59:45.902368 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca"} err="failed to get container status \"471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca\": rpc error: code = NotFound desc = could not find container \"471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca\": container with ID starting with 471db2ea2205b6f1d3a5586cdbba3aa6c38a4e80fcf269848ce63dabe96030ca not found: ID does not exist"
Mar 13 10:59:46.716233 master-0 kubenswrapper[33013]: I0313 10:59:46.716168 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:46.720303 master-0 kubenswrapper[33013]: I0313 10:59:46.718629 33013 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:46.722082 master-0 kubenswrapper[33013]: I0313 10:59:46.722058 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077dd10388b9e3e48a07382126e86621" path="/var/lib/kubelet/pods/077dd10388b9e3e48a07382126e86621/volumes"
Mar 13 10:59:46.747737 master-0 kubenswrapper[33013]: E0313 10:59:46.747691 33013 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:46.749351 master-0 kubenswrapper[33013]: E0313 10:59:46.748253 33013 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:46.749692 master-0 kubenswrapper[33013]: E0313 10:59:46.749663 33013 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:46.750210 master-0 kubenswrapper[33013]: E0313 10:59:46.750174 33013 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:46.752677 master-0 kubenswrapper[33013]: E0313 10:59:46.750627 33013 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:46.752677 master-0 kubenswrapper[33013]: I0313 10:59:46.750678 33013 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 13 10:59:46.752677 master-0 kubenswrapper[33013]: E0313 10:59:46.751176 33013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 13 10:59:46.953113 master-0 kubenswrapper[33013]: E0313 10:59:46.953012 33013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 13 10:59:47.355033 master-0 kubenswrapper[33013]: E0313 10:59:47.354938 33013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 13 10:59:48.156565 master-0 kubenswrapper[33013]: E0313 10:59:48.156502 33013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 13 10:59:49.462894 master-0 kubenswrapper[33013]: I0313 10:59:49.462810 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Mar 13 10:59:49.463546 master-0 kubenswrapper[33013]: I0313 10:59:49.462896 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Mar 13 10:59:49.758512 master-0 kubenswrapper[33013]: E0313 10:59:49.758348 33013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 13 10:59:50.541014 master-0 kubenswrapper[33013]: E0313 10:59:50.540923 33013 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:59:50Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:59:50Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:59:50Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T10:59:50Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:50.541856 master-0 kubenswrapper[33013]: E0313 10:59:50.541677 33013 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:50.542397 master-0 kubenswrapper[33013]: E0313 10:59:50.542349 33013 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:50.542945 master-0 kubenswrapper[33013]: E0313 10:59:50.542911 33013 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:50.543684 master-0 kubenswrapper[33013]: E0313 10:59:50.543637 33013 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:50.543684 master-0 kubenswrapper[33013]: E0313 10:59:50.543670 33013 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 10:59:51.707705 master-0 kubenswrapper[33013]: I0313 10:59:51.707558 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 10:59:51.707705 master-0 kubenswrapper[33013]: I0313 10:59:51.707641 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 10:59:52.896570 master-0 kubenswrapper[33013]: E0313 10:59:52.896436 33013 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c61867b4886fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:b275ed7e9ce09d69a66613ca3ae3d89e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 10:59:43.498995453 +0000 UTC m=+166.974948802,LastTimestamp:2026-03-13 10:59:43.498995453 +0000 UTC m=+166.974948802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 10:59:52.959575 master-0 kubenswrapper[33013]: E0313 10:59:52.959510 33013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 13 10:59:56.711985 master-0 kubenswrapper[33013]: I0313 10:59:56.711940 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:59:56.718884 master-0 kubenswrapper[33013]: I0313 10:59:56.718307 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:56.719955 master-0 kubenswrapper[33013]: I0313 10:59:56.719866 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:56.761532 master-0 kubenswrapper[33013]: I0313 10:59:56.761459 33013 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 10:59:56.761532 master-0 kubenswrapper[33013]: I0313 10:59:56.761510 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 10:59:56.762572 master-0 kubenswrapper[33013]: E0313 10:59:56.762491 33013 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:59:56.763222 master-0 kubenswrapper[33013]: I0313 10:59:56.763178 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:59:56.892108 master-0 kubenswrapper[33013]: I0313 10:59:56.892002 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"415b10b0106aed57706b343ba727edf2e2154ce14d4b38d0605a5bdccd45dede"}
Mar 13 10:59:56.895164 master-0 kubenswrapper[33013]: I0313 10:59:56.895113 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager/0.log"
Mar 13 10:59:56.895271 master-0 kubenswrapper[33013]: I0313 10:59:56.895182 33013 generic.go:334] "Generic (PLEG): container finished" podID="6aa84d96c35221e650d254cec915ee90" containerID="b13c60bcec66207b3ea2a744a2bea2122f3896924902c67d892d48a026ec7cde" exitCode=1
Mar 13 10:59:56.895271 master-0 kubenswrapper[33013]: I0313 10:59:56.895225 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerDied","Data":"b13c60bcec66207b3ea2a744a2bea2122f3896924902c67d892d48a026ec7cde"}
Mar 13 10:59:56.895966 master-0 kubenswrapper[33013]: I0313 10:59:56.895921 33013 scope.go:117] "RemoveContainer" containerID="b13c60bcec66207b3ea2a744a2bea2122f3896924902c67d892d48a026ec7cde"
Mar 13 10:59:56.896762 master-0 kubenswrapper[33013]: I0313 10:59:56.896683 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:56.897696 master-0 kubenswrapper[33013]: I0313 10:59:56.897640 33013 status_manager.go:851] "Failed to get status for pod" podUID="6aa84d96c35221e650d254cec915ee90" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:57.286242 master-0 kubenswrapper[33013]: I0313 10:59:57.286058 33013 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:59:57.907013 master-0 kubenswrapper[33013]: I0313 10:59:57.906955 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager/0.log"
Mar 13 10:59:57.907732 master-0 kubenswrapper[33013]: I0313 10:59:57.907214 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"4bc2563e4687b16637c0689e693842fbd842df192a43d7ee20a7af39f977383e"}
Mar 13 10:59:57.908477 master-0 kubenswrapper[33013]: I0313 10:59:57.908415 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:57.909128 master-0 kubenswrapper[33013]: I0313 10:59:57.909067 33013 status_manager.go:851] "Failed to get status for pod" podUID="6aa84d96c35221e650d254cec915ee90" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:57.909830 master-0 kubenswrapper[33013]: I0313 10:59:57.909789 33013 generic.go:334] "Generic (PLEG): container finished" podID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" containerID="5cd915a2e66776dcdcd8bf27d16d13eae52de4aa4682fea497523313dedd25a5" exitCode=0
Mar 13 10:59:57.909891 master-0 kubenswrapper[33013]: I0313 10:59:57.909834 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerDied","Data":"5cd915a2e66776dcdcd8bf27d16d13eae52de4aa4682fea497523313dedd25a5"}
Mar 13 10:59:57.910181 master-0 kubenswrapper[33013]: I0313 10:59:57.910146 33013 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 10:59:57.910181 master-0 kubenswrapper[33013]: I0313 10:59:57.910178 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 10:59:57.910723 master-0 kubenswrapper[33013]: I0313 10:59:57.910679 33013 status_manager.go:851] "Failed to get status for pod" podUID="6aa84d96c35221e650d254cec915ee90" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:57.910916 master-0 kubenswrapper[33013]: E0313 10:59:57.910829 33013 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 10:59:57.911222 master-0 kubenswrapper[33013]: I0313 10:59:57.911177 33013 status_manager.go:851] "Failed to get status for pod" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 10:59:58.563948 master-0 kubenswrapper[33013]: I0313 10:59:58.563870 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 10:59:58.564426 master-0 kubenswrapper[33013]: I0313 10:59:58.564369 33013 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 13 10:59:58.564494 master-0 kubenswrapper[33013]: I0313 10:59:58.564456 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 13 10:59:58.924351 master-0 kubenswrapper[33013]: I0313 10:59:58.924288 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"5f0e7de220459d1a13c5b292eb88389b12e65df925164828f2940f1a9cd13e1a"}
Mar 13 10:59:58.924351 master-0 kubenswrapper[33013]: I0313 10:59:58.924347 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"5f5ebb9fb117342c7b36175c7066be24622106ce53a121284294c2541189dc22"}
Mar 13 10:59:58.924351 master-0 kubenswrapper[33013]: I0313 10:59:58.924360 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"f55b7eb3c931f2149f2fa58169e791c9502de1f1731eb1d1e400ecc29b94d980"}
Mar 13 10:59:59.461887 master-0 kubenswrapper[33013]: I0313 10:59:59.461725 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Mar 13 10:59:59.461887 master-0 kubenswrapper[33013]: I0313 10:59:59.461797 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Mar 13 10:59:59.935949 master-0 kubenswrapper[33013]: I0313 10:59:59.935889 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"c77950dc8a4f101a5396a99b474ca8486fc947b83044038f48b4f0cd478c836a"}
Mar 13 10:59:59.935949 master-0 kubenswrapper[33013]: I0313 10:59:59.935951 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"1fb6c2b850478286673e044c9a6250a296a655520e5ce443c1b84c8661f95d12"}
Mar 13 10:59:59.936786 master-0 kubenswrapper[33013]: I0313 10:59:59.936759 33013 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 10:59:59.936870 master-0 kubenswrapper[33013]: I0313 10:59:59.936859 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 11:00:00.474950 master-0 kubenswrapper[33013]: I0313 11:00:00.474888 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 11:00:01.707910 master-0 kubenswrapper[33013]: I0313 11:00:01.707837 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 11:00:01.708394 master-0 kubenswrapper[33013]: I0313 11:00:01.707921 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 11:00:01.763605 master-0 kubenswrapper[33013]: I0313 11:00:01.763515 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 11:00:01.763605 master-0 kubenswrapper[33013]: I0313 11:00:01.763601 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 11:00:01.770183 master-0 kubenswrapper[33013]: I0313 11:00:01.770110 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 11:00:04.960208 master-0 kubenswrapper[33013]: I0313 11:00:04.960144 33013 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 11:00:05.979481 master-0 kubenswrapper[33013]: I0313 11:00:05.979419 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 11:00:05.979481 master-0 kubenswrapper[33013]: I0313 11:00:05.979479 33013 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 11:00:05.979481 master-0 kubenswrapper[33013]: I0313 11:00:05.979500 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 11:00:05.983952 master-0 kubenswrapper[33013]: I0313 11:00:05.983707 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 11:00:06.728932 master-0 kubenswrapper[33013]: I0313 11:00:06.728866 33013 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="995c6d6b-10a7-4ced-9d38-dbc253b93e7a"
Mar 13 11:00:06.986429 master-0 kubenswrapper[33013]: I0313 11:00:06.986288 33013 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 11:00:06.986429 master-0 kubenswrapper[33013]: I0313 11:00:06.986330 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 11:00:06.990974 master-0 kubenswrapper[33013]: I0313 11:00:06.990916 33013 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="995c6d6b-10a7-4ced-9d38-dbc253b93e7a"
Mar 13 11:00:07.992992 master-0 kubenswrapper[33013]: I0313 11:00:07.992915 33013 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 11:00:07.992992 master-0 kubenswrapper[33013]: I0313 11:00:07.992953 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="a8d83c3a-ec78-4cfc-b980-6bf53176eb21"
Mar 13 11:00:07.996685 master-0 kubenswrapper[33013]: I0313 11:00:07.996616 33013 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="995c6d6b-10a7-4ced-9d38-dbc253b93e7a"
Mar 13 11:00:08.563986 master-0 kubenswrapper[33013]: I0313 11:00:08.563944 33013 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 13 11:00:08.564326 master-0 kubenswrapper[33013]: I0313 11:00:08.564290 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 13 11:00:09.462703 master-0 kubenswrapper[33013]: I0313 11:00:09.462619 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
start-of-body= Mar 13 11:00:09.463743 master-0 kubenswrapper[33013]: I0313 11:00:09.462718 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 13 11:00:11.708757 master-0 kubenswrapper[33013]: I0313 11:00:11.708678 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 11:00:11.709427 master-0 kubenswrapper[33013]: I0313 11:00:11.708773 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 11:00:14.381080 master-0 kubenswrapper[33013]: I0313 11:00:14.381027 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 11:00:14.662730 master-0 kubenswrapper[33013]: I0313 11:00:14.662533 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 11:00:14.749574 master-0 kubenswrapper[33013]: I0313 11:00:14.749530 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 11:00:14.804463 master-0 kubenswrapper[33013]: I0313 11:00:14.804372 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 13 11:00:14.941845 master-0 kubenswrapper[33013]: I0313 11:00:14.938764 33013 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 11:00:15.572222 master-0 kubenswrapper[33013]: I0313 11:00:15.572153 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 11:00:15.719155 master-0 kubenswrapper[33013]: I0313 11:00:15.719095 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 11:00:16.029893 master-0 kubenswrapper[33013]: I0313 11:00:16.029763 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 13 11:00:16.196223 master-0 kubenswrapper[33013]: I0313 11:00:16.196166 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 13 11:00:16.201608 master-0 kubenswrapper[33013]: I0313 11:00:16.201555 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 13 11:00:16.404161 master-0 kubenswrapper[33013]: I0313 11:00:16.404093 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 11:00:16.587401 master-0 kubenswrapper[33013]: I0313 11:00:16.586327 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 13 11:00:16.744913 master-0 kubenswrapper[33013]: I0313 11:00:16.744523 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 11:00:16.838900 master-0 kubenswrapper[33013]: I0313 11:00:16.838830 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 11:00:16.865978 master-0 kubenswrapper[33013]: I0313 11:00:16.865931 33013 reflector.go:368] Caches populated for 
*v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 11:00:16.964055 master-0 kubenswrapper[33013]: I0313 11:00:16.963987 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 11:00:17.170932 master-0 kubenswrapper[33013]: I0313 11:00:17.170884 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 11:00:17.191641 master-0 kubenswrapper[33013]: I0313 11:00:17.191565 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 11:00:17.313465 master-0 kubenswrapper[33013]: I0313 11:00:17.313322 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 11:00:17.417110 master-0 kubenswrapper[33013]: I0313 11:00:17.417050 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 11:00:17.435712 master-0 kubenswrapper[33013]: I0313 11:00:17.435665 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-ntlbj" Mar 13 11:00:17.481001 master-0 kubenswrapper[33013]: I0313 11:00:17.480959 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 11:00:17.556163 master-0 kubenswrapper[33013]: I0313 11:00:17.556106 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 11:00:17.721722 master-0 kubenswrapper[33013]: I0313 11:00:17.721574 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 13 11:00:17.815103 master-0 kubenswrapper[33013]: I0313 11:00:17.815025 33013 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 11:00:17.884853 master-0 kubenswrapper[33013]: I0313 11:00:17.884794 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 13 11:00:17.910173 master-0 kubenswrapper[33013]: I0313 11:00:17.910131 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 13 11:00:17.915813 master-0 kubenswrapper[33013]: I0313 11:00:17.915762 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 11:00:17.932749 master-0 kubenswrapper[33013]: I0313 11:00:17.932687 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 11:00:18.053769 master-0 kubenswrapper[33013]: I0313 11:00:18.053717 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-tf6mr" Mar 13 11:00:18.260766 master-0 kubenswrapper[33013]: I0313 11:00:18.260698 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7mc4m" Mar 13 11:00:18.273900 master-0 kubenswrapper[33013]: I0313 11:00:18.273853 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 11:00:18.311158 master-0 kubenswrapper[33013]: I0313 11:00:18.311030 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 13 11:00:18.405777 master-0 kubenswrapper[33013]: I0313 11:00:18.405703 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 11:00:18.526966 master-0 
kubenswrapper[33013]: I0313 11:00:18.526905 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-chx8x" Mar 13 11:00:18.564403 master-0 kubenswrapper[33013]: I0313 11:00:18.564265 33013 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 11:00:18.564403 master-0 kubenswrapper[33013]: I0313 11:00:18.564340 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 11:00:18.564403 master-0 kubenswrapper[33013]: I0313 11:00:18.564397 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:00:18.565110 master-0 kubenswrapper[33013]: I0313 11:00:18.565073 33013 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"4bc2563e4687b16637c0689e693842fbd842df192a43d7ee20a7af39f977383e"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 11:00:18.565204 master-0 kubenswrapper[33013]: I0313 11:00:18.565180 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" 
containerID="cri-o://4bc2563e4687b16637c0689e693842fbd842df192a43d7ee20a7af39f977383e" gracePeriod=30 Mar 13 11:00:18.575542 master-0 kubenswrapper[33013]: I0313 11:00:18.575499 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 11:00:18.684127 master-0 kubenswrapper[33013]: I0313 11:00:18.684059 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 13 11:00:18.705735 master-0 kubenswrapper[33013]: I0313 11:00:18.705682 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 11:00:18.733196 master-0 kubenswrapper[33013]: I0313 11:00:18.733122 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 11:00:18.743203 master-0 kubenswrapper[33013]: I0313 11:00:18.743143 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 11:00:18.820309 master-0 kubenswrapper[33013]: I0313 11:00:18.820153 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 11:00:18.961254 master-0 kubenswrapper[33013]: I0313 11:00:18.961190 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 11:00:18.997575 master-0 kubenswrapper[33013]: I0313 11:00:18.997458 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 11:00:19.071287 master-0 kubenswrapper[33013]: I0313 11:00:19.071030 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 11:00:19.135945 master-0 kubenswrapper[33013]: I0313 11:00:19.135893 33013 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-dockercfg-9b2d2" Mar 13 11:00:19.212947 master-0 kubenswrapper[33013]: I0313 11:00:19.212902 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 11:00:19.215838 master-0 kubenswrapper[33013]: I0313 11:00:19.215816 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 11:00:19.305033 master-0 kubenswrapper[33013]: I0313 11:00:19.304978 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 11:00:19.346794 master-0 kubenswrapper[33013]: I0313 11:00:19.346662 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 11:00:19.353933 master-0 kubenswrapper[33013]: I0313 11:00:19.353896 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 13 11:00:19.389672 master-0 kubenswrapper[33013]: I0313 11:00:19.389619 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 11:00:19.405300 master-0 kubenswrapper[33013]: I0313 11:00:19.405227 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 11:00:19.462159 master-0 kubenswrapper[33013]: I0313 11:00:19.462078 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 13 11:00:19.462159 master-0 kubenswrapper[33013]: I0313 11:00:19.462155 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76bbbbbcd4-rgrm6" 
podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 13 11:00:19.498207 master-0 kubenswrapper[33013]: I0313 11:00:19.497961 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 11:00:19.498536 master-0 kubenswrapper[33013]: I0313 11:00:19.498323 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 11:00:19.498666 master-0 kubenswrapper[33013]: I0313 11:00:19.498552 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 11:00:19.567387 master-0 kubenswrapper[33013]: I0313 11:00:19.567147 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 11:00:19.636265 master-0 kubenswrapper[33013]: I0313 11:00:19.636111 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 11:00:19.647284 master-0 kubenswrapper[33013]: I0313 11:00:19.647247 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 11:00:19.685187 master-0 kubenswrapper[33013]: I0313 11:00:19.685153 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 13 11:00:19.691501 master-0 kubenswrapper[33013]: I0313 11:00:19.691461 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-q8ddz" Mar 13 11:00:19.717100 master-0 kubenswrapper[33013]: I0313 11:00:19.716968 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 
11:00:19.758425 master-0 kubenswrapper[33013]: I0313 11:00:19.758364 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 11:00:19.771135 master-0 kubenswrapper[33013]: I0313 11:00:19.771053 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 13 11:00:19.940572 master-0 kubenswrapper[33013]: I0313 11:00:19.940412 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 13 11:00:19.998484 master-0 kubenswrapper[33013]: I0313 11:00:19.998442 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-27hpj" Mar 13 11:00:20.220364 master-0 kubenswrapper[33013]: I0313 11:00:20.220244 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 11:00:20.293715 master-0 kubenswrapper[33013]: I0313 11:00:20.293651 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 11:00:20.354017 master-0 kubenswrapper[33013]: I0313 11:00:20.353959 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 11:00:20.420098 master-0 kubenswrapper[33013]: I0313 11:00:20.420032 33013 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 11:00:20.460865 master-0 kubenswrapper[33013]: I0313 11:00:20.460802 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 13 11:00:20.472834 master-0 kubenswrapper[33013]: I0313 11:00:20.472692 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 11:00:20.509437 
master-0 kubenswrapper[33013]: I0313 11:00:20.509397 33013 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 11:00:20.510906 master-0 kubenswrapper[33013]: I0313 11:00:20.510865 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 11:00:20.519813 master-0 kubenswrapper[33013]: I0313 11:00:20.519772 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 11:00:20.528761 master-0 kubenswrapper[33013]: I0313 11:00:20.528704 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 13 11:00:20.567145 master-0 kubenswrapper[33013]: I0313 11:00:20.567068 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 13 11:00:20.589960 master-0 kubenswrapper[33013]: I0313 11:00:20.589911 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 13 11:00:20.690927 master-0 kubenswrapper[33013]: I0313 11:00:20.690872 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 11:00:20.867276 master-0 kubenswrapper[33013]: I0313 11:00:20.867216 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 11:00:20.878300 master-0 kubenswrapper[33013]: I0313 11:00:20.878248 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 13 11:00:20.947946 master-0 kubenswrapper[33013]: I0313 11:00:20.947887 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 11:00:21.022458 master-0 kubenswrapper[33013]: I0313 
11:00:21.022374 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 11:00:21.028722 master-0 kubenswrapper[33013]: I0313 11:00:21.028673 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 11:00:21.046698 master-0 kubenswrapper[33013]: I0313 11:00:21.046635 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-bk5cz" Mar 13 11:00:21.174251 master-0 kubenswrapper[33013]: I0313 11:00:21.174114 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 11:00:21.197534 master-0 kubenswrapper[33013]: I0313 11:00:21.197497 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-4j4rp" Mar 13 11:00:21.218489 master-0 kubenswrapper[33013]: I0313 11:00:21.218438 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 11:00:21.360086 master-0 kubenswrapper[33013]: I0313 11:00:21.360013 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-frjx4" Mar 13 11:00:21.408841 master-0 kubenswrapper[33013]: I0313 11:00:21.408757 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 11:00:21.480081 master-0 kubenswrapper[33013]: I0313 11:00:21.479944 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 11:00:21.558112 master-0 kubenswrapper[33013]: I0313 11:00:21.558047 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 
13 11:00:21.563865 master-0 kubenswrapper[33013]: I0313 11:00:21.563805 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 11:00:21.652263 master-0 kubenswrapper[33013]: I0313 11:00:21.652225 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 11:00:21.690275 master-0 kubenswrapper[33013]: I0313 11:00:21.690216 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 11:00:21.707732 master-0 kubenswrapper[33013]: I0313 11:00:21.707666 33013 patch_prober.go:28] interesting pod/console-dc44494b5-hphsz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 11:00:21.708009 master-0 kubenswrapper[33013]: I0313 11:00:21.707737 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 11:00:21.726756 master-0 kubenswrapper[33013]: I0313 11:00:21.726698 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 11:00:21.755366 master-0 kubenswrapper[33013]: I0313 11:00:21.755208 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 13 11:00:21.770933 master-0 kubenswrapper[33013]: I0313 11:00:21.770874 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 11:00:21.926226 master-0 kubenswrapper[33013]: I0313 11:00:21.926175 33013 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 11:00:21.943495 master-0 kubenswrapper[33013]: I0313 11:00:21.943454 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 11:00:21.966440 master-0 kubenswrapper[33013]: I0313 11:00:21.966392 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 11:00:22.153046 master-0 kubenswrapper[33013]: I0313 11:00:22.152990 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 13 11:00:22.161848 master-0 kubenswrapper[33013]: I0313 11:00:22.161789 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jzkqp" Mar 13 11:00:22.177367 master-0 kubenswrapper[33013]: I0313 11:00:22.177316 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 11:00:22.228493 master-0 kubenswrapper[33013]: I0313 11:00:22.228432 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 11:00:22.361029 master-0 kubenswrapper[33013]: I0313 11:00:22.360980 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 11:00:22.370293 master-0 kubenswrapper[33013]: I0313 11:00:22.370239 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 11:00:22.431972 master-0 kubenswrapper[33013]: I0313 11:00:22.431827 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 11:00:22.507488 master-0 kubenswrapper[33013]: I0313 
11:00:22.507434 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 11:00:22.557126 master-0 kubenswrapper[33013]: I0313 11:00:22.556931 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 13 11:00:22.573799 master-0 kubenswrapper[33013]: I0313 11:00:22.573735 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 13 11:00:22.612517 master-0 kubenswrapper[33013]: I0313 11:00:22.612459 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 11:00:22.681492 master-0 kubenswrapper[33013]: I0313 11:00:22.681416 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 11:00:22.764296 master-0 kubenswrapper[33013]: I0313 11:00:22.764143 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 11:00:22.836725 master-0 kubenswrapper[33013]: I0313 11:00:22.836644 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 11:00:22.836936 master-0 kubenswrapper[33013]: I0313 11:00:22.836898 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 11:00:22.858044 master-0 kubenswrapper[33013]: I0313 11:00:22.857989 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 11:00:22.897170 master-0 kubenswrapper[33013]: I0313 11:00:22.897092 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 11:00:22.898421 master-0 kubenswrapper[33013]: I0313 
11:00:22.898400 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 11:00:22.902499 master-0 kubenswrapper[33013]: I0313 11:00:22.902462 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 13 11:00:22.969245 master-0 kubenswrapper[33013]: I0313 11:00:22.969179 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 13 11:00:22.969945 master-0 kubenswrapper[33013]: I0313 11:00:22.969914 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-l5tkf"
Mar 13 11:00:23.047244 master-0 kubenswrapper[33013]: I0313 11:00:23.047163 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-lr8wh"
Mar 13 11:00:23.061076 master-0 kubenswrapper[33013]: I0313 11:00:23.061016 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-ng7z2"
Mar 13 11:00:23.070408 master-0 kubenswrapper[33013]: I0313 11:00:23.070361 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 11:00:23.095324 master-0 kubenswrapper[33013]: I0313 11:00:23.095246 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 13 11:00:23.107813 master-0 kubenswrapper[33013]: I0313 11:00:23.107750 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 11:00:23.124371 master-0 kubenswrapper[33013]: I0313 11:00:23.124326 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 13 11:00:23.202027 master-0 kubenswrapper[33013]: I0313 11:00:23.201958 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 11:00:23.216513 master-0 kubenswrapper[33013]: I0313 11:00:23.216461 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 11:00:23.227763 master-0 kubenswrapper[33013]: I0313 11:00:23.227718 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-45xkz"
Mar 13 11:00:23.254526 master-0 kubenswrapper[33013]: I0313 11:00:23.254476 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 13 11:00:23.255295 master-0 kubenswrapper[33013]: I0313 11:00:23.255266 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-98k6z"
Mar 13 11:00:23.271299 master-0 kubenswrapper[33013]: I0313 11:00:23.271256 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 11:00:23.291291 master-0 kubenswrapper[33013]: I0313 11:00:23.291230 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 13 11:00:23.347273 master-0 kubenswrapper[33013]: I0313 11:00:23.347125 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 13 11:00:23.376277 master-0 kubenswrapper[33013]: I0313 11:00:23.376206 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 13 11:00:23.405165 master-0 kubenswrapper[33013]: I0313 11:00:23.405072 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-4xgf9"
Mar 13 11:00:23.491519 master-0 kubenswrapper[33013]: I0313 11:00:23.491435 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 13 11:00:23.516147 master-0 kubenswrapper[33013]: I0313 11:00:23.516100 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 13 11:00:23.554729 master-0 kubenswrapper[33013]: I0313 11:00:23.554658 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-4xv4g"
Mar 13 11:00:23.620669 master-0 kubenswrapper[33013]: I0313 11:00:23.620442 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 11:00:23.641323 master-0 kubenswrapper[33013]: I0313 11:00:23.641279 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 13 11:00:23.737161 master-0 kubenswrapper[33013]: I0313 11:00:23.737114 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 13 11:00:23.775547 master-0 kubenswrapper[33013]: I0313 11:00:23.775501 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 13 11:00:23.796754 master-0 kubenswrapper[33013]: I0313 11:00:23.796657 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 13 11:00:23.830655 master-0 kubenswrapper[33013]: I0313 11:00:23.830616 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 13 11:00:23.870915 master-0 kubenswrapper[33013]: I0313 11:00:23.870710 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-jkx4c"
Mar 13 11:00:23.916056 master-0 kubenswrapper[33013]: I0313 11:00:23.916000 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 11:00:23.937883 master-0 kubenswrapper[33013]: I0313 11:00:23.937832 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 13 11:00:24.046748 master-0 kubenswrapper[33013]: I0313 11:00:24.046652 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 11:00:24.084696 master-0 kubenswrapper[33013]: I0313 11:00:24.084612 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 13 11:00:24.192730 master-0 kubenswrapper[33013]: I0313 11:00:24.192336 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 13 11:00:24.274628 master-0 kubenswrapper[33013]: I0313 11:00:24.274560 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 13 11:00:24.384738 master-0 kubenswrapper[33013]: I0313 11:00:24.384697 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 11:00:24.421357 master-0 kubenswrapper[33013]: I0313 11:00:24.421318 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 13 11:00:24.448972 master-0 kubenswrapper[33013]: I0313 11:00:24.448489 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 11:00:24.501032 master-0 kubenswrapper[33013]: I0313 11:00:24.500938 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 11:00:24.591952 master-0 kubenswrapper[33013]: I0313 11:00:24.591913 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 13 11:00:24.626886 master-0 kubenswrapper[33013]: I0313 11:00:24.626838 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 13 11:00:24.629421 master-0 kubenswrapper[33013]: I0313 11:00:24.629315 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 11:00:24.641170 master-0 kubenswrapper[33013]: I0313 11:00:24.641117 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 13 11:00:24.676800 master-0 kubenswrapper[33013]: I0313 11:00:24.676760 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 11:00:24.766239 master-0 kubenswrapper[33013]: I0313 11:00:24.766081 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 13 11:00:24.812675 master-0 kubenswrapper[33013]: I0313 11:00:24.812436 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 11:00:24.853515 master-0 kubenswrapper[33013]: I0313 11:00:24.853452 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 13 11:00:24.924736 master-0 kubenswrapper[33013]: I0313 11:00:24.924669 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 11:00:24.980615 master-0 kubenswrapper[33013]: I0313 11:00:24.977562 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 13 11:00:25.025719 master-0 kubenswrapper[33013]: I0313 11:00:25.025550 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 13 11:00:25.029377 master-0 kubenswrapper[33013]: I0313 11:00:25.029341 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 13 11:00:25.067440 master-0 kubenswrapper[33013]: I0313 11:00:25.062043 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 13 11:00:25.090290 master-0 kubenswrapper[33013]: I0313 11:00:25.090227 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 13 11:00:25.129229 master-0 kubenswrapper[33013]: I0313 11:00:25.129114 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 11:00:25.156866 master-0 kubenswrapper[33013]: I0313 11:00:25.156811 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 13 11:00:25.161317 master-0 kubenswrapper[33013]: I0313 11:00:25.161276 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 13 11:00:25.162436 master-0 kubenswrapper[33013]: I0313 11:00:25.162407 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 11:00:25.240888 master-0 kubenswrapper[33013]: I0313 11:00:25.240813 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 13 11:00:25.261995 master-0 kubenswrapper[33013]: I0313 11:00:25.261934 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 13 11:00:25.294725 master-0 kubenswrapper[33013]: I0313 11:00:25.294554 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 11:00:25.302831 master-0 kubenswrapper[33013]: I0313 11:00:25.300569 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 13 11:00:25.319209 master-0 kubenswrapper[33013]: I0313 11:00:25.319162 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 13 11:00:25.330484 master-0 kubenswrapper[33013]: I0313 11:00:25.330396 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-jjg66"
Mar 13 11:00:25.362983 master-0 kubenswrapper[33013]: I0313 11:00:25.362910 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 11:00:25.376543 master-0 kubenswrapper[33013]: I0313 11:00:25.376443 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 13 11:00:25.394357 master-0 kubenswrapper[33013]: I0313 11:00:25.394253 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 11:00:25.411186 master-0 kubenswrapper[33013]: I0313 11:00:25.411078 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-t9jpj"
Mar 13 11:00:25.461977 master-0 kubenswrapper[33013]: I0313 11:00:25.461876 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-fwh6p"
Mar 13 11:00:25.493479 master-0 kubenswrapper[33013]: I0313 11:00:25.493404 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 11:00:25.519323 master-0 kubenswrapper[33013]: I0313 11:00:25.519254 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 13 11:00:25.535269 master-0 kubenswrapper[33013]: I0313 11:00:25.535208 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 11:00:25.577703 master-0 kubenswrapper[33013]: I0313 11:00:25.577560 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 13 11:00:25.694676 master-0 kubenswrapper[33013]: I0313 11:00:25.694570 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 13 11:00:25.746139 master-0 kubenswrapper[33013]: I0313 11:00:25.746089 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 13 11:00:25.768728 master-0 kubenswrapper[33013]: I0313 11:00:25.768673 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 13 11:00:25.783742 master-0 kubenswrapper[33013]: I0313 11:00:25.783698 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 13 11:00:25.803368 master-0 kubenswrapper[33013]: I0313 11:00:25.803288 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 13 11:00:25.896478 master-0 kubenswrapper[33013]: I0313 11:00:25.896350 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 11:00:25.942110 master-0 kubenswrapper[33013]: I0313 11:00:25.942053 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 11:00:25.946337 master-0 kubenswrapper[33013]: I0313 11:00:25.946302 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 11:00:26.011693 master-0 kubenswrapper[33013]: I0313 11:00:26.011566 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 13 11:00:26.023935 master-0 kubenswrapper[33013]: I0313 11:00:26.023878 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 13 11:00:26.123683 master-0 kubenswrapper[33013]: I0313 11:00:26.123638 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 11:00:26.125791 master-0 kubenswrapper[33013]: I0313 11:00:26.125737 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 13 11:00:26.170687 master-0 kubenswrapper[33013]: I0313 11:00:26.170489 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 11:00:26.184174 master-0 kubenswrapper[33013]: I0313 11:00:26.183964 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 11:00:26.277783 master-0 kubenswrapper[33013]: I0313 11:00:26.277712 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 13 11:00:26.325908 master-0 kubenswrapper[33013]: I0313 11:00:26.325846 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 13 11:00:26.363070 master-0 kubenswrapper[33013]: I0313 11:00:26.363005 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bfgw8"
Mar 13 11:00:26.369796 master-0 kubenswrapper[33013]: I0313 11:00:26.369765 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zwgvd"
Mar 13 11:00:26.469811 master-0 kubenswrapper[33013]: I0313 11:00:26.469661 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 13 11:00:26.491286 master-0 kubenswrapper[33013]: I0313 11:00:26.488836 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-5p4h2"
Mar 13 11:00:26.517710 master-0 kubenswrapper[33013]: I0313 11:00:26.517657 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 13 11:00:26.542039 master-0 kubenswrapper[33013]: I0313 11:00:26.541989 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-w2czt"
Mar 13 11:00:26.548847 master-0 kubenswrapper[33013]: I0313 11:00:26.548695 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 11:00:26.576319 master-0 kubenswrapper[33013]: I0313 11:00:26.576267 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 13 11:00:26.579950 master-0 kubenswrapper[33013]: I0313 11:00:26.579917 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 13 11:00:26.584171 master-0 kubenswrapper[33013]: I0313 11:00:26.584144 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 13 11:00:26.606362 master-0 kubenswrapper[33013]: I0313 11:00:26.606323 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 13 11:00:26.614049 master-0 kubenswrapper[33013]: I0313 11:00:26.614005 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 13 11:00:26.711286 master-0 kubenswrapper[33013]: I0313 11:00:26.711230 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fn5mm"
Mar 13 11:00:26.852898 master-0 kubenswrapper[33013]: I0313 11:00:26.852834 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 13 11:00:26.856448 master-0 kubenswrapper[33013]: I0313 11:00:26.856418 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 13 11:00:26.869892 master-0 kubenswrapper[33013]: I0313 11:00:26.869840 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 13 11:00:26.884526 master-0 kubenswrapper[33013]: I0313 11:00:26.884488 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-6rglb"
Mar 13 11:00:26.943347 master-0 kubenswrapper[33013]: I0313 11:00:26.943284 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 13 11:00:26.952733 master-0 kubenswrapper[33013]: I0313 11:00:26.952689 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 11:00:26.955967 master-0 kubenswrapper[33013]: I0313 11:00:26.955910 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-xdq92"
Mar 13 11:00:26.956967 master-0 kubenswrapper[33013]: I0313 11:00:26.956913 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 11:00:26.983709 master-0 kubenswrapper[33013]: I0313 11:00:26.983630 33013 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 13 11:00:27.016607 master-0 kubenswrapper[33013]: I0313 11:00:27.016536 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 11:00:27.020661 master-0 kubenswrapper[33013]: I0313 11:00:27.020612 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 13 11:00:27.112107 master-0 kubenswrapper[33013]: I0313 11:00:27.111989 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 13 11:00:27.124778 master-0 kubenswrapper[33013]: I0313 11:00:27.124754 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 13 11:00:27.133282 master-0 kubenswrapper[33013]: I0313 11:00:27.133248 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 11:00:27.204136 master-0 kubenswrapper[33013]: I0313 11:00:27.204055 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 13 11:00:27.244927 master-0 kubenswrapper[33013]: I0313 11:00:27.243963 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 13 11:00:27.254707 master-0 kubenswrapper[33013]: I0313 11:00:27.253891 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 13 11:00:27.297231 master-0 kubenswrapper[33013]: I0313 11:00:27.297178 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 13 11:00:27.340963 master-0 kubenswrapper[33013]: I0313 11:00:27.340889 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-rxbss"
Mar 13 11:00:27.400032 master-0 kubenswrapper[33013]: I0313 11:00:27.399874 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 11:00:27.402539 master-0 kubenswrapper[33013]: I0313 11:00:27.402494 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 11:00:27.571793 master-0 kubenswrapper[33013]: I0313 11:00:27.571732 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 13 11:00:27.590382 master-0 kubenswrapper[33013]: I0313 11:00:27.590310 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 11:00:27.605259 master-0 kubenswrapper[33013]: I0313 11:00:27.605183 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7mk1tpvcusf46"
Mar 13 11:00:27.610636 master-0 kubenswrapper[33013]: I0313 11:00:27.610599 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 13 11:00:27.620253 master-0 kubenswrapper[33013]: I0313 11:00:27.620184 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 11:00:27.649015 master-0 kubenswrapper[33013]: I0313 11:00:27.648911 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-x42l5"
Mar 13 11:00:27.651536 master-0 kubenswrapper[33013]: I0313 11:00:27.651407 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 11:00:27.776176 master-0 kubenswrapper[33013]: I0313 11:00:27.776086 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 13 11:00:27.801312 master-0 kubenswrapper[33013]: I0313 11:00:27.801225 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 13 11:00:27.938349 master-0 kubenswrapper[33013]: I0313 11:00:27.938199 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 13 11:00:27.995726 master-0 kubenswrapper[33013]: I0313 11:00:27.995644 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-4gsfk"
Mar 13 11:00:28.035259 master-0 kubenswrapper[33013]: I0313 11:00:28.035171 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 13 11:00:28.099269 master-0 kubenswrapper[33013]: I0313 11:00:28.099205 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 13 11:00:28.140624 master-0 kubenswrapper[33013]: I0313 11:00:28.140569 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 11:00:28.142289 master-0 kubenswrapper[33013]: I0313 11:00:28.142235 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-xpxj2"
Mar 13 11:00:28.161995 master-0 kubenswrapper[33013]: I0313 11:00:28.161899 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 13 11:00:28.341640 master-0 kubenswrapper[33013]: I0313 11:00:28.341549 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 13 11:00:28.426053 master-0 kubenswrapper[33013]: I0313 11:00:28.425927 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 11:00:28.434764 master-0 kubenswrapper[33013]: I0313 11:00:28.434668 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 13 11:00:28.538799 master-0 kubenswrapper[33013]: I0313 11:00:28.538643 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 13 11:00:28.540111 master-0 kubenswrapper[33013]: I0313 11:00:28.540043 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 11:00:28.751696 master-0 kubenswrapper[33013]: I0313 11:00:28.751519 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 11:00:28.819419 master-0 kubenswrapper[33013]: I0313 11:00:28.819348 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 13 11:00:28.843148 master-0 kubenswrapper[33013]: I0313 11:00:28.843070 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 11:00:28.899817 master-0 kubenswrapper[33013]: I0313 11:00:28.899745 33013 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 13 11:00:28.906492 master-0 kubenswrapper[33013]: I0313 11:00:28.906405 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 13 11:00:28.906662 master-0 kubenswrapper[33013]: I0313 11:00:28.906510 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 13 11:00:28.910723 master-0 kubenswrapper[33013]: I0313 11:00:28.910703 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 11:00:28.928403 master-0 kubenswrapper[33013]: I0313 11:00:28.928331 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=24.928310157 podStartE2EDuration="24.928310157s" podCreationTimestamp="2026-03-13 11:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:00:28.925888759 +0000 UTC m=+212.401842108" watchObservedRunningTime="2026-03-13 11:00:28.928310157 +0000 UTC m=+212.404263506"
Mar 13 11:00:29.091505 master-0 kubenswrapper[33013]: I0313 11:00:29.091418 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 13 11:00:29.117781 master-0 kubenswrapper[33013]: I0313 11:00:29.117040 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 13 11:00:29.195218 master-0 kubenswrapper[33013]: I0313 11:00:29.195134 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 11:00:29.203698 master-0 kubenswrapper[33013]: I0313 11:00:29.203629 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 11:00:29.286981 master-0 kubenswrapper[33013]: I0313 11:00:29.286888 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 11:00:29.297232 master-0 kubenswrapper[33013]: I0313 11:00:29.297166 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 13 11:00:29.390460 master-0 kubenswrapper[33013]: I0313 11:00:29.390222 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-ncbcz"
Mar 13 11:00:29.399248 master-0 kubenswrapper[33013]: I0313 11:00:29.399163 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 13 11:00:29.418261 master-0 kubenswrapper[33013]: I0313 11:00:29.418144 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 13 11:00:29.462927 master-0 kubenswrapper[33013]: I0313 11:00:29.462816 33013 patch_prober.go:28] interesting pod/console-76bbbbbcd4-rgrm6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Mar 13 11:00:29.462927 master-0 kubenswrapper[33013]: I0313 11:00:29.462903 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Mar 13 11:00:29.555671 master-0 kubenswrapper[33013]: I0313 11:00:29.555529 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 13 11:00:29.820157 master-0 kubenswrapper[33013]: I0313 11:00:29.820068 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 11:00:29.825755 master-0 kubenswrapper[33013]: I0313 11:00:29.825697 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 11:00:30.034520 master-0 kubenswrapper[33013]: I0313 11:00:30.034409 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 13 11:00:30.038000 master-0 kubenswrapper[33013]: I0313 11:00:30.037914 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 13 11:00:30.044038 master-0 kubenswrapper[33013]: I0313 11:00:30.043956 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 13 11:00:30.174675 master-0 kubenswrapper[33013]: I0313 11:00:30.174402 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 13 11:00:30.241291 master-0 kubenswrapper[33013]: I0313 11:00:30.241220 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 13 11:00:30.325558 master-0 kubenswrapper[33013]: I0313 11:00:30.325492 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 11:00:30.359477 master-0 kubenswrapper[33013]: I0313 11:00:30.359406 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-4mksc"
Mar 13 11:00:30.375506 master-0 kubenswrapper[33013]: I0313 11:00:30.375442 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 13 11:00:30.689644 master-0 kubenswrapper[33013]: I0313 11:00:30.689563 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 11:00:30.701978 master-0 kubenswrapper[33013]: I0313 11:00:30.701905 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-fdstf"
Mar 13 11:00:30.759293 master-0 kubenswrapper[33013]: I0313 11:00:30.759204 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 13 11:00:30.771164 master-0 kubenswrapper[33013]: I0313 11:00:30.771045 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 11:00:30.830337 master-0 kubenswrapper[33013]: I0313 11:00:30.830266 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 11:00:30.992918 master-0 kubenswrapper[33013]: I0313 11:00:30.992696 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 13 11:00:31.038573 master-0 kubenswrapper[33013]: I0313 11:00:31.038444 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 11:00:31.105406 master-0 kubenswrapper[33013]: I0313 11:00:31.105315 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 11:00:31.716337 master-0 kubenswrapper[33013]: I0313 11:00:31.716249 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 11:00:31.725355 master-0 kubenswrapper[33013]: I0313 11:00:31.725272 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-dc44494b5-hphsz"
Mar 13 11:00:32.299622 master-0 kubenswrapper[33013]: I0313 11:00:32.299520 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 13 11:00:38.742567 master-0 kubenswrapper[33013]: I0313 11:00:38.742501 33013 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 11:00:38.743206 master-0 kubenswrapper[33013]: I0313 11:00:38.742800 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor" containerID="cri-o://714ae15495778fcf056c5fdbee5044a806f2fb2c6cea9cfee5beae8f8c530b70" gracePeriod=5
Mar 13 11:00:39.466604 master-0 kubenswrapper[33013]: I0313 11:00:39.466503 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76bbbbbcd4-rgrm6"
Mar 13 11:00:39.471823 master-0 kubenswrapper[33013]: I0313 11:00:39.471752 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76bbbbbcd4-rgrm6"
Mar 13 11:00:44.257466 master-0 kubenswrapper[33013]: I0313 11:00:44.257394 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b275ed7e9ce09d69a66613ca3ae3d89e/startup-monitor/0.log"
Mar 13 11:00:44.257466 master-0 kubenswrapper[33013]: I0313 11:00:44.257454 33013 generic.go:334] "Generic (PLEG): container finished" podID="b275ed7e9ce09d69a66613ca3ae3d89e" containerID="714ae15495778fcf056c5fdbee5044a806f2fb2c6cea9cfee5beae8f8c530b70" exitCode=137
Mar 13 11:00:44.311891 master-0 kubenswrapper[33013]: I0313 11:00:44.311811 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b275ed7e9ce09d69a66613ca3ae3d89e/startup-monitor/0.log"
Mar 13 11:00:44.312104 master-0 kubenswrapper[33013]:
I0313 11:00:44.311911 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 11:00:44.411612 master-0 kubenswrapper[33013]: I0313 11:00:44.411500 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " Mar 13 11:00:44.411889 master-0 kubenswrapper[33013]: I0313 11:00:44.411654 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " Mar 13 11:00:44.411889 master-0 kubenswrapper[33013]: I0313 11:00:44.411681 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests" (OuterVolumeSpecName: "manifests") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:00:44.411889 master-0 kubenswrapper[33013]: I0313 11:00:44.411751 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " Mar 13 11:00:44.411889 master-0 kubenswrapper[33013]: I0313 11:00:44.411783 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " Mar 13 11:00:44.411889 master-0 kubenswrapper[33013]: I0313 11:00:44.411826 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " Mar 13 11:00:44.411889 master-0 kubenswrapper[33013]: I0313 11:00:44.411768 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:00:44.411889 master-0 kubenswrapper[33013]: I0313 11:00:44.411888 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock" (OuterVolumeSpecName: "var-lock") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:00:44.412204 master-0 kubenswrapper[33013]: I0313 11:00:44.412013 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log" (OuterVolumeSpecName: "var-log") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:00:44.412296 master-0 kubenswrapper[33013]: I0313 11:00:44.412270 33013 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 11:00:44.412296 master-0 kubenswrapper[33013]: I0313 11:00:44.412290 33013 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 11:00:44.412361 master-0 kubenswrapper[33013]: I0313 11:00:44.412300 33013 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") on node \"master-0\" DevicePath \"\"" Mar 13 11:00:44.412361 master-0 kubenswrapper[33013]: I0313 11:00:44.412308 33013 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") on node \"master-0\" DevicePath \"\"" Mar 13 11:00:44.417179 master-0 kubenswrapper[33013]: I0313 11:00:44.417117 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:00:44.513753 master-0 kubenswrapper[33013]: I0313 11:00:44.513554 33013 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 11:00:44.720345 master-0 kubenswrapper[33013]: I0313 11:00:44.720279 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" path="/var/lib/kubelet/pods/b275ed7e9ce09d69a66613ca3ae3d89e/volumes" Mar 13 11:00:45.269131 master-0 kubenswrapper[33013]: I0313 11:00:45.269083 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b275ed7e9ce09d69a66613ca3ae3d89e/startup-monitor/0.log" Mar 13 11:00:45.269689 master-0 kubenswrapper[33013]: I0313 11:00:45.269193 33013 scope.go:117] "RemoveContainer" containerID="714ae15495778fcf056c5fdbee5044a806f2fb2c6cea9cfee5beae8f8c530b70" Mar 13 11:00:45.269689 master-0 kubenswrapper[33013]: I0313 11:00:45.269248 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 11:00:48.777534 master-0 kubenswrapper[33013]: E0313 11:00:48.777473 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:48.794045 master-0 kubenswrapper[33013]: E0313 11:00:48.777553 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 11:00:49.277532631 +0000 UTC m=+232.753485980 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:49.286381 master-0 kubenswrapper[33013]: E0313 11:00:49.286185 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:49.286381 master-0 kubenswrapper[33013]: E0313 11:00:49.286287 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 11:00:50.286271842 +0000 UTC m=+233.762225191 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:49.311058 master-0 kubenswrapper[33013]: I0313 11:00:49.310998 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager/1.log" Mar 13 11:00:49.312456 master-0 kubenswrapper[33013]: I0313 11:00:49.312413 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager/0.log" Mar 13 11:00:49.312541 master-0 kubenswrapper[33013]: I0313 11:00:49.312470 33013 generic.go:334] "Generic (PLEG): container finished" podID="6aa84d96c35221e650d254cec915ee90" containerID="4bc2563e4687b16637c0689e693842fbd842df192a43d7ee20a7af39f977383e" exitCode=137 Mar 13 
11:00:49.312541 master-0 kubenswrapper[33013]: I0313 11:00:49.312509 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerDied","Data":"4bc2563e4687b16637c0689e693842fbd842df192a43d7ee20a7af39f977383e"} Mar 13 11:00:49.312664 master-0 kubenswrapper[33013]: I0313 11:00:49.312559 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6aa84d96c35221e650d254cec915ee90","Type":"ContainerStarted","Data":"f64a190ab6bfcd5d71dd09d08481400c5646db74ded1e7ad4ac16e4a9b0b9632"} Mar 13 11:00:49.312664 master-0 kubenswrapper[33013]: I0313 11:00:49.312602 33013 scope.go:117] "RemoveContainer" containerID="b13c60bcec66207b3ea2a744a2bea2122f3896924902c67d892d48a026ec7cde" Mar 13 11:00:50.302273 master-0 kubenswrapper[33013]: E0313 11:00:50.302185 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:50.303302 master-0 kubenswrapper[33013]: E0313 11:00:50.302338 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 11:00:52.302306785 +0000 UTC m=+235.778260144 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:50.323933 master-0 kubenswrapper[33013]: I0313 11:00:50.323868 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager/1.log" Mar 13 11:00:50.475570 master-0 kubenswrapper[33013]: I0313 11:00:50.475440 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:00:52.341724 master-0 kubenswrapper[33013]: E0313 11:00:52.341675 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:52.342829 master-0 kubenswrapper[33013]: E0313 11:00:52.342762 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 11:00:56.342739067 +0000 UTC m=+239.818692416 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:53.990738 master-0 kubenswrapper[33013]: I0313 11:00:53.990672 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 11:00:55.288210 master-0 kubenswrapper[33013]: I0313 11:00:55.288145 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 13 11:00:56.406786 master-0 kubenswrapper[33013]: E0313 11:00:56.406735 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:56.407357 master-0 kubenswrapper[33013]: E0313 11:00:56.406830 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:04.406811697 +0000 UTC m=+247.882765056 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:00:58.563614 master-0 kubenswrapper[33013]: I0313 11:00:58.563490 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:00:58.569896 master-0 kubenswrapper[33013]: I0313 11:00:58.569847 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:00:59.401567 master-0 kubenswrapper[33013]: I0313 11:00:59.401484 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:01:04.425338 master-0 kubenswrapper[33013]: E0313 11:01:04.425296 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:01:04.426003 master-0 kubenswrapper[33013]: E0313 11:01:04.425372 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:20.425354479 +0000 UTC m=+263.901307828 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:01:07.348794 master-0 kubenswrapper[33013]: I0313 11:01:07.348460 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-mnqcc"] Mar 13 11:01:07.348794 master-0 kubenswrapper[33013]: E0313 11:01:07.348769 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor" Mar 13 11:01:07.348794 master-0 kubenswrapper[33013]: I0313 11:01:07.348782 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor" Mar 13 11:01:07.349532 master-0 kubenswrapper[33013]: E0313 11:01:07.348815 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" containerName="installer" Mar 13 11:01:07.349532 master-0 kubenswrapper[33013]: I0313 11:01:07.348822 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" containerName="installer" Mar 13 11:01:07.349532 master-0 kubenswrapper[33013]: I0313 11:01:07.348934 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd6c4b8c-418d-4f69-9a1f-ebd0ee56daec" containerName="installer" Mar 13 11:01:07.349532 master-0 kubenswrapper[33013]: I0313 11:01:07.348970 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor" Mar 13 11:01:07.349532 master-0 kubenswrapper[33013]: I0313 11:01:07.349406 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.354837 master-0 kubenswrapper[33013]: I0313 11:01:07.354790 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-7lxxf" Mar 13 11:01:07.355055 master-0 kubenswrapper[33013]: I0313 11:01:07.355036 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 11:01:07.402748 master-0 kubenswrapper[33013]: I0313 11:01:07.402652 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-68597ccc5b-xrb8c"] Mar 13 11:01:07.403020 master-0 kubenswrapper[33013]: I0313 11:01:07.402945 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" podUID="b68ed803-45e2-42f1-99b1-33cf59b01d74" containerName="metrics-server" containerID="cri-o://a53ccb10d38781462661d28f14cee8ad4f8374b8664112cbbcf7c91c9615f04e" gracePeriod=170 Mar 13 11:01:07.414940 master-0 kubenswrapper[33013]: I0313 11:01:07.414892 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-dc44494b5-hphsz"] Mar 13 11:01:07.477471 master-0 kubenswrapper[33013]: I0313 11:01:07.477180 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/94b34590-360b-4413-b43a-824574a7b35e-serviceca\") pod \"node-ca-mnqcc\" (UID: \"94b34590-360b-4413-b43a-824574a7b35e\") " pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.477471 master-0 kubenswrapper[33013]: I0313 11:01:07.477296 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95k4\" (UniqueName: \"kubernetes.io/projected/94b34590-360b-4413-b43a-824574a7b35e-kube-api-access-j95k4\") pod \"node-ca-mnqcc\" (UID: 
\"94b34590-360b-4413-b43a-824574a7b35e\") " pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.477471 master-0 kubenswrapper[33013]: I0313 11:01:07.477340 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94b34590-360b-4413-b43a-824574a7b35e-host\") pod \"node-ca-mnqcc\" (UID: \"94b34590-360b-4413-b43a-824574a7b35e\") " pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.483300 master-0 kubenswrapper[33013]: I0313 11:01:07.483257 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-57bfbb88b4-c4p4m"] Mar 13 11:01:07.485302 master-0 kubenswrapper[33013]: I0313 11:01:07.485265 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.488522 master-0 kubenswrapper[33013]: I0313 11:01:07.488438 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4t7fa0cu8pren" Mar 13 11:01:07.504997 master-0 kubenswrapper[33013]: I0313 11:01:07.504938 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-666885cccb-4sjng"] Mar 13 11:01:07.515676 master-0 kubenswrapper[33013]: I0313 11:01:07.511371 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.515676 master-0 kubenswrapper[33013]: I0313 11:01:07.513313 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-57bfbb88b4-c4p4m"] Mar 13 11:01:07.515676 master-0 kubenswrapper[33013]: I0313 11:01:07.514576 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 13 11:01:07.515676 master-0 kubenswrapper[33013]: I0313 11:01:07.514655 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 13 11:01:07.515676 master-0 kubenswrapper[33013]: I0313 11:01:07.514813 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 13 11:01:07.515676 master-0 kubenswrapper[33013]: I0313 11:01:07.515001 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-pric2e185j16" Mar 13 11:01:07.515676 master-0 kubenswrapper[33013]: I0313 11:01:07.515113 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 13 11:01:07.515676 master-0 kubenswrapper[33013]: I0313 11:01:07.515220 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 13 11:01:07.520133 master-0 kubenswrapper[33013]: I0313 11:01:07.520101 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-666885cccb-4sjng"] Mar 13 11:01:07.578606 master-0 kubenswrapper[33013]: I0313 11:01:07.578399 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95k4\" (UniqueName: \"kubernetes.io/projected/94b34590-360b-4413-b43a-824574a7b35e-kube-api-access-j95k4\") pod \"node-ca-mnqcc\" (UID: 
\"94b34590-360b-4413-b43a-824574a7b35e\") " pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.578606 master-0 kubenswrapper[33013]: I0313 11:01:07.578470 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a6ecec-d812-4692-817c-e556f28d2145-client-ca-bundle\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.578606 master-0 kubenswrapper[33013]: I0313 11:01:07.578503 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-grpc-tls\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.578606 master-0 kubenswrapper[33013]: I0313 11:01:07.578536 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fn4g\" (UniqueName: \"kubernetes.io/projected/b5a6ecec-d812-4692-817c-e556f28d2145-kube-api-access-5fn4g\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.578606 master-0 kubenswrapper[33013]: I0313 11:01:07.578562 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.578606 master-0 kubenswrapper[33013]: I0313 11:01:07.578610 33013 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a6ecec-d812-4692-817c-e556f28d2145-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.578635 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94b34590-360b-4413-b43a-824574a7b35e-host\") pod \"node-ca-mnqcc\" (UID: \"94b34590-360b-4413-b43a-824574a7b35e\") " pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.578667 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.578704 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.578738 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-tls\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.578765 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6t8t\" (UniqueName: \"kubernetes.io/projected/60f5017c-11bf-45b9-813d-40722e028b1c-kube-api-access-r6t8t\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.578821 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b5a6ecec-d812-4692-817c-e556f28d2145-secret-metrics-client-certs\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.578875 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94b34590-360b-4413-b43a-824574a7b35e-host\") pod \"node-ca-mnqcc\" (UID: \"94b34590-360b-4413-b43a-824574a7b35e\") " pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.578882 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b5a6ecec-d812-4692-817c-e556f28d2145-metrics-server-audit-profiles\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 
11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.579019 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b5a6ecec-d812-4692-817c-e556f28d2145-audit-log\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.579054 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.579094 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b5a6ecec-d812-4692-817c-e556f28d2145-secret-metrics-server-tls\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.579123 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/60f5017c-11bf-45b9-813d-40722e028b1c-metrics-client-ca\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.579169 master-0 kubenswrapper[33013]: I0313 11:01:07.579149 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/94b34590-360b-4413-b43a-824574a7b35e-serviceca\") pod \"node-ca-mnqcc\" (UID: \"94b34590-360b-4413-b43a-824574a7b35e\") " pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.580018 master-0 kubenswrapper[33013]: I0313 11:01:07.579972 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/94b34590-360b-4413-b43a-824574a7b35e-serviceca\") pod \"node-ca-mnqcc\" (UID: \"94b34590-360b-4413-b43a-824574a7b35e\") " pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.602754 master-0 kubenswrapper[33013]: I0313 11:01:07.602629 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95k4\" (UniqueName: \"kubernetes.io/projected/94b34590-360b-4413-b43a-824574a7b35e-kube-api-access-j95k4\") pod \"node-ca-mnqcc\" (UID: \"94b34590-360b-4413-b43a-824574a7b35e\") " pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.678396 master-0 kubenswrapper[33013]: I0313 11:01:07.678327 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-mnqcc" Mar 13 11:01:07.681010 master-0 kubenswrapper[33013]: I0313 11:01:07.680847 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a6ecec-d812-4692-817c-e556f28d2145-client-ca-bundle\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.681010 master-0 kubenswrapper[33013]: I0313 11:01:07.680879 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-grpc-tls\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.681010 master-0 kubenswrapper[33013]: I0313 11:01:07.680905 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fn4g\" (UniqueName: \"kubernetes.io/projected/b5a6ecec-d812-4692-817c-e556f28d2145-kube-api-access-5fn4g\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.681010 master-0 kubenswrapper[33013]: I0313 11:01:07.680924 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.681010 master-0 kubenswrapper[33013]: I0313 11:01:07.680947 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a6ecec-d812-4692-817c-e556f28d2145-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.681010 master-0 kubenswrapper[33013]: I0313 11:01:07.680968 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.681821 master-0 kubenswrapper[33013]: I0313 11:01:07.681774 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.681931 master-0 kubenswrapper[33013]: I0313 11:01:07.681839 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-tls\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.681931 master-0 kubenswrapper[33013]: I0313 11:01:07.681860 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6t8t\" (UniqueName: \"kubernetes.io/projected/60f5017c-11bf-45b9-813d-40722e028b1c-kube-api-access-r6t8t\") pod 
\"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.682018 master-0 kubenswrapper[33013]: I0313 11:01:07.681935 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b5a6ecec-d812-4692-817c-e556f28d2145-secret-metrics-client-certs\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.682018 master-0 kubenswrapper[33013]: I0313 11:01:07.681951 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b5a6ecec-d812-4692-817c-e556f28d2145-metrics-server-audit-profiles\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.682018 master-0 kubenswrapper[33013]: I0313 11:01:07.681999 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b5a6ecec-d812-4692-817c-e556f28d2145-audit-log\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.682018 master-0 kubenswrapper[33013]: I0313 11:01:07.682020 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.682204 master-0 kubenswrapper[33013]: I0313 11:01:07.682056 
33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b5a6ecec-d812-4692-817c-e556f28d2145-secret-metrics-server-tls\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.682204 master-0 kubenswrapper[33013]: I0313 11:01:07.682078 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/60f5017c-11bf-45b9-813d-40722e028b1c-metrics-client-ca\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.682749 master-0 kubenswrapper[33013]: I0313 11:01:07.682694 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a6ecec-d812-4692-817c-e556f28d2145-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.683083 master-0 kubenswrapper[33013]: I0313 11:01:07.683046 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/60f5017c-11bf-45b9-813d-40722e028b1c-metrics-client-ca\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.684084 master-0 kubenswrapper[33013]: I0313 11:01:07.684047 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b5a6ecec-d812-4692-817c-e556f28d2145-metrics-server-audit-profiles\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: 
\"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.684280 master-0 kubenswrapper[33013]: I0313 11:01:07.684243 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-grpc-tls\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.684344 master-0 kubenswrapper[33013]: I0313 11:01:07.684307 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b5a6ecec-d812-4692-817c-e556f28d2145-audit-log\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.684344 master-0 kubenswrapper[33013]: I0313 11:01:07.684319 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.684450 master-0 kubenswrapper[33013]: I0313 11:01:07.684434 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a6ecec-d812-4692-817c-e556f28d2145-client-ca-bundle\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.685286 master-0 kubenswrapper[33013]: I0313 11:01:07.685255 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.685714 master-0 kubenswrapper[33013]: I0313 11:01:07.685686 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b5a6ecec-d812-4692-817c-e556f28d2145-secret-metrics-client-certs\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.688412 master-0 kubenswrapper[33013]: I0313 11:01:07.687320 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b5a6ecec-d812-4692-817c-e556f28d2145-secret-metrics-server-tls\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.688412 master-0 kubenswrapper[33013]: I0313 11:01:07.687955 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.697352 master-0 kubenswrapper[33013]: I0313 11:01:07.697317 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " 
pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.701665 master-0 kubenswrapper[33013]: I0313 11:01:07.697691 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/60f5017c-11bf-45b9-813d-40722e028b1c-secret-thanos-querier-tls\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.701665 master-0 kubenswrapper[33013]: I0313 11:01:07.699607 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fn4g\" (UniqueName: \"kubernetes.io/projected/b5a6ecec-d812-4692-817c-e556f28d2145-kube-api-access-5fn4g\") pod \"metrics-server-57bfbb88b4-c4p4m\" (UID: \"b5a6ecec-d812-4692-817c-e556f28d2145\") " pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.707438 master-0 kubenswrapper[33013]: I0313 11:01:07.707376 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6t8t\" (UniqueName: \"kubernetes.io/projected/60f5017c-11bf-45b9-813d-40722e028b1c-kube-api-access-r6t8t\") pod \"thanos-querier-666885cccb-4sjng\" (UID: \"60f5017c-11bf-45b9-813d-40722e028b1c\") " pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:07.811555 master-0 kubenswrapper[33013]: I0313 11:01:07.811481 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:07.837012 master-0 kubenswrapper[33013]: I0313 11:01:07.836977 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:08.221524 master-0 kubenswrapper[33013]: I0313 11:01:08.221446 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-57bfbb88b4-c4p4m"] Mar 13 11:01:08.221952 master-0 kubenswrapper[33013]: W0313 11:01:08.221866 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5a6ecec_d812_4692_817c_e556f28d2145.slice/crio-95ee73ce3dd41650f3ba736d206b5902b77d62f6e8c7ade6ed853e0131cf263c WatchSource:0}: Error finding container 95ee73ce3dd41650f3ba736d206b5902b77d62f6e8c7ade6ed853e0131cf263c: Status 404 returned error can't find the container with id 95ee73ce3dd41650f3ba736d206b5902b77d62f6e8c7ade6ed853e0131cf263c Mar 13 11:01:08.289838 master-0 kubenswrapper[33013]: I0313 11:01:08.289768 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-666885cccb-4sjng"] Mar 13 11:01:08.292977 master-0 kubenswrapper[33013]: W0313 11:01:08.292934 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60f5017c_11bf_45b9_813d_40722e028b1c.slice/crio-b7894f4a979ba2a6d2d599e238cb3e408c76fccd17b3fceec842d55c3c223343 WatchSource:0}: Error finding container b7894f4a979ba2a6d2d599e238cb3e408c76fccd17b3fceec842d55c3c223343: Status 404 returned error can't find the container with id b7894f4a979ba2a6d2d599e238cb3e408c76fccd17b3fceec842d55c3c223343 Mar 13 11:01:08.471761 master-0 kubenswrapper[33013]: I0313 11:01:08.471559 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mnqcc" event={"ID":"94b34590-360b-4413-b43a-824574a7b35e","Type":"ContainerStarted","Data":"6d7b8ab0eb165d578c16d1ce2d159d1cedb4ee239b69c54cfe8e80f413cd82c7"} Mar 13 11:01:08.473977 master-0 kubenswrapper[33013]: I0313 11:01:08.473905 33013 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" event={"ID":"60f5017c-11bf-45b9-813d-40722e028b1c","Type":"ContainerStarted","Data":"b7894f4a979ba2a6d2d599e238cb3e408c76fccd17b3fceec842d55c3c223343"} Mar 13 11:01:08.475939 master-0 kubenswrapper[33013]: I0313 11:01:08.475896 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" event={"ID":"b5a6ecec-d812-4692-817c-e556f28d2145","Type":"ContainerStarted","Data":"7b7d2c6a547537acee45d580b01da3ac0313dc6e233f0032c490b3d985caaabe"} Mar 13 11:01:08.476022 master-0 kubenswrapper[33013]: I0313 11:01:08.475943 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" event={"ID":"b5a6ecec-d812-4692-817c-e556f28d2145","Type":"ContainerStarted","Data":"95ee73ce3dd41650f3ba736d206b5902b77d62f6e8c7ade6ed853e0131cf263c"} Mar 13 11:01:08.495213 master-0 kubenswrapper[33013]: I0313 11:01:08.495147 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" podStartSLOduration=1.49513061 podStartE2EDuration="1.49513061s" podCreationTimestamp="2026-03-13 11:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:01:08.494124992 +0000 UTC m=+251.970078341" watchObservedRunningTime="2026-03-13 11:01:08.49513061 +0000 UTC m=+251.971083959" Mar 13 11:01:11.510078 master-0 kubenswrapper[33013]: I0313 11:01:11.510008 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" event={"ID":"60f5017c-11bf-45b9-813d-40722e028b1c","Type":"ContainerStarted","Data":"46ecb5627057c02fd2caf5d5c494ac0676375900dd9d44a985a0839b45a244dc"} Mar 13 11:01:11.510078 master-0 kubenswrapper[33013]: I0313 11:01:11.510066 33013 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" event={"ID":"60f5017c-11bf-45b9-813d-40722e028b1c","Type":"ContainerStarted","Data":"4856f6a4e045d34de7f3b11a8b07c55ff9e068f9926f9f8fbca2c18540cf6cc7"} Mar 13 11:01:11.510078 master-0 kubenswrapper[33013]: I0313 11:01:11.510078 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" event={"ID":"60f5017c-11bf-45b9-813d-40722e028b1c","Type":"ContainerStarted","Data":"8a2ca399320864fc02c95cb84e19a608318732c7165ba02751482c1d8cd30f87"} Mar 13 11:01:11.511673 master-0 kubenswrapper[33013]: I0313 11:01:11.511621 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mnqcc" event={"ID":"94b34590-360b-4413-b43a-824574a7b35e","Type":"ContainerStarted","Data":"d0ef56296298463a9766bf764c1804eb746d10e34dc31b41b97678125962e678"} Mar 13 11:01:11.526742 master-0 kubenswrapper[33013]: I0313 11:01:11.526654 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-mnqcc" podStartSLOduration=1.328487022 podStartE2EDuration="4.52663442s" podCreationTimestamp="2026-03-13 11:01:07 +0000 UTC" firstStartedPulling="2026-03-13 11:01:07.734315487 +0000 UTC m=+251.210268836" lastFinishedPulling="2026-03-13 11:01:10.932462885 +0000 UTC m=+254.408416234" observedRunningTime="2026-03-13 11:01:11.525801797 +0000 UTC m=+255.001755146" watchObservedRunningTime="2026-03-13 11:01:11.52663442 +0000 UTC m=+255.002587769" Mar 13 11:01:13.537156 master-0 kubenswrapper[33013]: I0313 11:01:13.537079 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" event={"ID":"60f5017c-11bf-45b9-813d-40722e028b1c","Type":"ContainerStarted","Data":"4175d784f5b33f0c559d4e7ac0473897af9b3e5f94c2b67ce7708831745cf943"} Mar 13 11:01:13.537156 master-0 kubenswrapper[33013]: I0313 11:01:13.537144 33013 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" event={"ID":"60f5017c-11bf-45b9-813d-40722e028b1c","Type":"ContainerStarted","Data":"91a4b5e7381d056200509cd3ebad60cf1a9e91badfe5623cd28b00dce8c9f546"} Mar 13 11:01:13.537156 master-0 kubenswrapper[33013]: I0313 11:01:13.537162 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" event={"ID":"60f5017c-11bf-45b9-813d-40722e028b1c","Type":"ContainerStarted","Data":"c39bc6d4c0579194c4c6e1e471d10b3dd5aaa19057a56884b3caa2e0f7752724"} Mar 13 11:01:13.537885 master-0 kubenswrapper[33013]: I0313 11:01:13.537520 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:13.580875 master-0 kubenswrapper[33013]: I0313 11:01:13.580608 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" podStartSLOduration=2.488244327 podStartE2EDuration="6.580551834s" podCreationTimestamp="2026-03-13 11:01:07 +0000 UTC" firstStartedPulling="2026-03-13 11:01:08.295248944 +0000 UTC m=+251.771202293" lastFinishedPulling="2026-03-13 11:01:12.387556461 +0000 UTC m=+255.863509800" observedRunningTime="2026-03-13 11:01:13.574881693 +0000 UTC m=+257.050835042" watchObservedRunningTime="2026-03-13 11:01:13.580551834 +0000 UTC m=+257.056505203" Mar 13 11:01:17.845612 master-0 kubenswrapper[33013]: I0313 11:01:17.845530 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-666885cccb-4sjng" Mar 13 11:01:20.526504 master-0 kubenswrapper[33013]: E0313 11:01:20.525742 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:01:20.526504 master-0 kubenswrapper[33013]: E0313 11:01:20.525830 33013 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:52.525816647 +0000 UTC m=+296.001769996 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:01:27.812246 master-0 kubenswrapper[33013]: I0313 11:01:27.812148 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:27.812246 master-0 kubenswrapper[33013]: I0313 11:01:27.812246 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:28.830147 master-0 kubenswrapper[33013]: I0313 11:01:28.830079 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 11:01:28.832649 master-0 kubenswrapper[33013]: I0313 11:01:28.832609 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.837035 master-0 kubenswrapper[33013]: I0313 11:01:28.835509 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 13 11:01:28.837035 master-0 kubenswrapper[33013]: I0313 11:01:28.835513 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 13 11:01:28.837035 master-0 kubenswrapper[33013]: I0313 11:01:28.835742 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 13 11:01:28.837035 master-0 kubenswrapper[33013]: I0313 11:01:28.835923 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 13 11:01:28.837035 master-0 kubenswrapper[33013]: I0313 11:01:28.836030 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 13 11:01:28.837035 master-0 kubenswrapper[33013]: I0313 11:01:28.836099 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 13 11:01:28.837035 master-0 kubenswrapper[33013]: I0313 11:01:28.836647 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-3i0ec0sk02irt" Mar 13 11:01:28.837376 master-0 kubenswrapper[33013]: I0313 11:01:28.837250 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 13 11:01:28.840362 master-0 kubenswrapper[33013]: I0313 11:01:28.839694 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 13 11:01:28.840362 master-0 kubenswrapper[33013]: I0313 11:01:28.839964 33013 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 13 11:01:28.843079 master-0 kubenswrapper[33013]: I0313 11:01:28.842185 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 13 11:01:28.845703 master-0 kubenswrapper[33013]: I0313 11:01:28.845232 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 13 11:01:28.850101 master-0 kubenswrapper[33013]: I0313 11:01:28.850056 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d648a041-61d2-4e6b-aa52-d5951ad4edf1-config-out\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850101 master-0 kubenswrapper[33013]: I0313 11:01:28.850101 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-web-config\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850300 master-0 kubenswrapper[33013]: I0313 11:01:28.850129 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d648a041-61d2-4e6b-aa52-d5951ad4edf1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850300 master-0 kubenswrapper[33013]: I0313 11:01:28.850153 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-metrics-client-certs\") pod 
\"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850300 master-0 kubenswrapper[33013]: I0313 11:01:28.850261 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850300 master-0 kubenswrapper[33013]: I0313 11:01:28.850288 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850506 master-0 kubenswrapper[33013]: I0313 11:01:28.850304 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d648a041-61d2-4e6b-aa52-d5951ad4edf1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850506 master-0 kubenswrapper[33013]: I0313 11:01:28.850321 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850506 master-0 kubenswrapper[33013]: I0313 11:01:28.850343 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850506 master-0 kubenswrapper[33013]: I0313 11:01:28.850470 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850653 master-0 kubenswrapper[33013]: I0313 11:01:28.850517 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850653 master-0 kubenswrapper[33013]: I0313 11:01:28.850538 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhrmj\" (UniqueName: \"kubernetes.io/projected/d648a041-61d2-4e6b-aa52-d5951ad4edf1-kube-api-access-jhrmj\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850653 master-0 kubenswrapper[33013]: I0313 11:01:28.850565 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850653 master-0 kubenswrapper[33013]: I0313 11:01:28.850624 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850769 master-0 kubenswrapper[33013]: I0313 11:01:28.850677 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850769 master-0 kubenswrapper[33013]: I0313 11:01:28.850706 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850769 master-0 kubenswrapper[33013]: I0313 11:01:28.850724 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-config\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.850769 master-0 kubenswrapper[33013]: I0313 11:01:28.850743 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.870642 master-0 kubenswrapper[33013]: I0313 11:01:28.866816 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 11:01:28.951649 master-0 kubenswrapper[33013]: I0313 11:01:28.951597 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.951922 master-0 kubenswrapper[33013]: I0313 11:01:28.951899 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952010 master-0 kubenswrapper[33013]: I0313 11:01:28.951997 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-config\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952097 master-0 kubenswrapper[33013]: I0313 11:01:28.952084 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952364 master-0 kubenswrapper[33013]: I0313 11:01:28.952302 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d648a041-61d2-4e6b-aa52-d5951ad4edf1-config-out\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952436 master-0 kubenswrapper[33013]: I0313 11:01:28.952367 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-web-config\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952436 master-0 kubenswrapper[33013]: I0313 11:01:28.952402 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d648a041-61d2-4e6b-aa52-d5951ad4edf1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952436 master-0 kubenswrapper[33013]: I0313 11:01:28.952429 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952596 master-0 kubenswrapper[33013]: I0313 11:01:28.952537 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952596 master-0 kubenswrapper[33013]: I0313 11:01:28.952563 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952690 master-0 kubenswrapper[33013]: I0313 11:01:28.952630 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d648a041-61d2-4e6b-aa52-d5951ad4edf1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952690 master-0 kubenswrapper[33013]: I0313 11:01:28.952654 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952690 master-0 kubenswrapper[33013]: I0313 11:01:28.952675 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952818 master-0 kubenswrapper[33013]: I0313 11:01:28.952751 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-configmap-kubelet-serving-ca-bundle\") pod 
\"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952818 master-0 kubenswrapper[33013]: I0313 11:01:28.952770 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952818 master-0 kubenswrapper[33013]: I0313 11:01:28.952797 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhrmj\" (UniqueName: \"kubernetes.io/projected/d648a041-61d2-4e6b-aa52-d5951ad4edf1-kube-api-access-jhrmj\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952955 master-0 kubenswrapper[33013]: I0313 11:01:28.952831 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.952955 master-0 kubenswrapper[33013]: I0313 11:01:28.952897 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.953047 master-0 kubenswrapper[33013]: I0313 11:01:28.952990 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.953101 master-0 kubenswrapper[33013]: E0313 11:01:28.953090 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:28.953170 master-0 kubenswrapper[33013]: E0313 11:01:28.953148 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:29.453129197 +0000 UTC m=+272.929082546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:28.953307 master-0 kubenswrapper[33013]: I0313 11:01:28.953267 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/d648a041-61d2-4e6b-aa52-d5951ad4edf1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.953449 master-0 kubenswrapper[33013]: E0313 11:01:28.953416 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 11:01:28.953514 master-0 kubenswrapper[33013]: E0313 11:01:28.953449 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls 
podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:29.453441576 +0000 UTC m=+272.929394925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-tls" not found Mar 13 11:01:28.954483 master-0 kubenswrapper[33013]: I0313 11:01:28.954457 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.955269 master-0 kubenswrapper[33013]: I0313 11:01:28.955236 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.955343 master-0 kubenswrapper[33013]: I0313 11:01:28.955247 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.956565 master-0 kubenswrapper[33013]: I0313 11:01:28.956522 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-prometheus-trusted-ca-bundle\") 
pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.956998 master-0 kubenswrapper[33013]: I0313 11:01:28.956971 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d648a041-61d2-4e6b-aa52-d5951ad4edf1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.958385 master-0 kubenswrapper[33013]: I0313 11:01:28.958332 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.958707 master-0 kubenswrapper[33013]: I0313 11:01:28.958633 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d648a041-61d2-4e6b-aa52-d5951ad4edf1-config-out\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.958859 master-0 kubenswrapper[33013]: I0313 11:01:28.958833 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-web-config\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.959089 master-0 kubenswrapper[33013]: I0313 11:01:28.959048 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.959135 master-0 kubenswrapper[33013]: I0313 11:01:28.959086 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.960427 master-0 kubenswrapper[33013]: I0313 11:01:28.960396 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-config\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.962395 master-0 kubenswrapper[33013]: I0313 11:01:28.962366 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.964008 master-0 kubenswrapper[33013]: I0313 11:01:28.963969 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d648a041-61d2-4e6b-aa52-d5951ad4edf1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:28.976705 master-0 kubenswrapper[33013]: I0313 11:01:28.976615 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhrmj\" (UniqueName: \"kubernetes.io/projected/d648a041-61d2-4e6b-aa52-d5951ad4edf1-kube-api-access-jhrmj\") pod \"prometheus-k8s-0\" (UID: 
\"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:29.460887 master-0 kubenswrapper[33013]: I0313 11:01:29.460817 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:29.461173 master-0 kubenswrapper[33013]: I0313 11:01:29.460913 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:29.461173 master-0 kubenswrapper[33013]: E0313 11:01:29.461081 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 11:01:29.461258 master-0 kubenswrapper[33013]: E0313 11:01:29.461099 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:29.461293 master-0 kubenswrapper[33013]: E0313 11:01:29.461193 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:30.46117122 +0000 UTC m=+273.937124569 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-tls" not found Mar 13 11:01:29.461340 master-0 kubenswrapper[33013]: E0313 11:01:29.461306 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:30.461273632 +0000 UTC m=+273.937227101 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:30.475241 master-0 kubenswrapper[33013]: I0313 11:01:30.475156 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:30.476074 master-0 kubenswrapper[33013]: I0313 11:01:30.475296 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:30.476074 master-0 kubenswrapper[33013]: E0313 11:01:30.475331 33013 secret.go:189] Couldn't get secret 
openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 11:01:30.476074 master-0 kubenswrapper[33013]: E0313 11:01:30.475458 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:32.475430923 +0000 UTC m=+275.951384292 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-tls" not found Mar 13 11:01:30.476074 master-0 kubenswrapper[33013]: E0313 11:01:30.475464 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:30.476074 master-0 kubenswrapper[33013]: E0313 11:01:30.475548 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:32.475522725 +0000 UTC m=+275.951476224 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:32.504802 master-0 kubenswrapper[33013]: I0313 11:01:32.504705 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:32.505698 master-0 kubenswrapper[33013]: I0313 11:01:32.504880 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:32.505698 master-0 kubenswrapper[33013]: E0313 11:01:32.505239 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:32.505698 master-0 kubenswrapper[33013]: E0313 11:01:32.505328 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:36.505302696 +0000 UTC m=+279.981256085 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:32.506156 master-0 kubenswrapper[33013]: I0313 11:01:32.506053 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-dc44494b5-hphsz" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" containerID="cri-o://4a18dd10aa2bd5ad5912ef46ab2e43f10b398b974a4eef2d39a9131f286e217f" gracePeriod=15 Mar 13 11:01:32.506251 master-0 kubenswrapper[33013]: E0313 11:01:32.506107 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 11:01:32.506332 master-0 kubenswrapper[33013]: E0313 11:01:32.506299 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:36.506277694 +0000 UTC m=+279.982231033 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-tls" not found Mar 13 11:01:32.674703 master-0 kubenswrapper[33013]: I0313 11:01:32.674642 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-dc44494b5-hphsz_fbe2e6b6-cd6e-490b-b89e-ed78463012e3/console/0.log" Mar 13 11:01:32.674925 master-0 kubenswrapper[33013]: I0313 11:01:32.674725 33013 generic.go:334] "Generic (PLEG): container finished" podID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerID="4a18dd10aa2bd5ad5912ef46ab2e43f10b398b974a4eef2d39a9131f286e217f" exitCode=2 Mar 13 11:01:32.674925 master-0 kubenswrapper[33013]: I0313 11:01:32.674767 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dc44494b5-hphsz" event={"ID":"fbe2e6b6-cd6e-490b-b89e-ed78463012e3","Type":"ContainerDied","Data":"4a18dd10aa2bd5ad5912ef46ab2e43f10b398b974a4eef2d39a9131f286e217f"} Mar 13 11:01:32.957372 master-0 kubenswrapper[33013]: I0313 11:01:32.957320 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-dc44494b5-hphsz_fbe2e6b6-cd6e-490b-b89e-ed78463012e3/console/0.log" Mar 13 11:01:32.957687 master-0 kubenswrapper[33013]: I0313 11:01:32.957411 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-dc44494b5-hphsz" Mar 13 11:01:33.014682 master-0 kubenswrapper[33013]: I0313 11:01:33.014620 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-oauth-serving-cert\") pod \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " Mar 13 11:01:33.014988 master-0 kubenswrapper[33013]: I0313 11:01:33.014697 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-serving-cert\") pod \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " Mar 13 11:01:33.014988 master-0 kubenswrapper[33013]: I0313 11:01:33.014747 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-service-ca\") pod \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " Mar 13 11:01:33.014988 master-0 kubenswrapper[33013]: I0313 11:01:33.014801 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-oauth-config\") pod \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " Mar 13 11:01:33.014988 master-0 kubenswrapper[33013]: I0313 11:01:33.014846 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-config\") pod \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " Mar 13 11:01:33.014988 master-0 kubenswrapper[33013]: I0313 
11:01:33.014921 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-trusted-ca-bundle\") pod \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " Mar 13 11:01:33.014988 master-0 kubenswrapper[33013]: I0313 11:01:33.014970 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mchd\" (UniqueName: \"kubernetes.io/projected/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-kube-api-access-4mchd\") pod \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\" (UID: \"fbe2e6b6-cd6e-490b-b89e-ed78463012e3\") " Mar 13 11:01:33.016099 master-0 kubenswrapper[33013]: I0313 11:01:33.016039 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-config" (OuterVolumeSpecName: "console-config") pod "fbe2e6b6-cd6e-490b-b89e-ed78463012e3" (UID: "fbe2e6b6-cd6e-490b-b89e-ed78463012e3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:01:33.016221 master-0 kubenswrapper[33013]: I0313 11:01:33.016064 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "fbe2e6b6-cd6e-490b-b89e-ed78463012e3" (UID: "fbe2e6b6-cd6e-490b-b89e-ed78463012e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:01:33.016278 master-0 kubenswrapper[33013]: I0313 11:01:33.016256 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fbe2e6b6-cd6e-490b-b89e-ed78463012e3" (UID: "fbe2e6b6-cd6e-490b-b89e-ed78463012e3"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:01:33.016491 master-0 kubenswrapper[33013]: I0313 11:01:33.016458 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fbe2e6b6-cd6e-490b-b89e-ed78463012e3" (UID: "fbe2e6b6-cd6e-490b-b89e-ed78463012e3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:01:33.018333 master-0 kubenswrapper[33013]: I0313 11:01:33.018271 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fbe2e6b6-cd6e-490b-b89e-ed78463012e3" (UID: "fbe2e6b6-cd6e-490b-b89e-ed78463012e3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:01:33.018425 master-0 kubenswrapper[33013]: I0313 11:01:33.018300 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fbe2e6b6-cd6e-490b-b89e-ed78463012e3" (UID: "fbe2e6b6-cd6e-490b-b89e-ed78463012e3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:01:33.019972 master-0 kubenswrapper[33013]: I0313 11:01:33.019889 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-kube-api-access-4mchd" (OuterVolumeSpecName: "kube-api-access-4mchd") pod "fbe2e6b6-cd6e-490b-b89e-ed78463012e3" (UID: "fbe2e6b6-cd6e-490b-b89e-ed78463012e3"). InnerVolumeSpecName "kube-api-access-4mchd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:01:33.117396 master-0 kubenswrapper[33013]: I0313 11:01:33.117352 33013 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 11:01:33.117663 master-0 kubenswrapper[33013]: I0313 11:01:33.117649 33013 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:01:33.117728 master-0 kubenswrapper[33013]: I0313 11:01:33.117719 33013 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:01:33.117796 master-0 kubenswrapper[33013]: I0313 11:01:33.117785 33013 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:01:33.117863 master-0 kubenswrapper[33013]: I0313 11:01:33.117853 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mchd\" (UniqueName: \"kubernetes.io/projected/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-kube-api-access-4mchd\") on node \"master-0\" DevicePath \"\"" Mar 13 11:01:33.117926 master-0 kubenswrapper[33013]: I0313 11:01:33.117917 33013 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 11:01:33.117994 master-0 kubenswrapper[33013]: I0313 11:01:33.117983 33013 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fbe2e6b6-cd6e-490b-b89e-ed78463012e3-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 11:01:33.684128 master-0 kubenswrapper[33013]: I0313 11:01:33.684082 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-dc44494b5-hphsz_fbe2e6b6-cd6e-490b-b89e-ed78463012e3/console/0.log" Mar 13 11:01:33.684885 master-0 kubenswrapper[33013]: I0313 11:01:33.684862 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-dc44494b5-hphsz" event={"ID":"fbe2e6b6-cd6e-490b-b89e-ed78463012e3","Type":"ContainerDied","Data":"448b3e4908ad0d7658cd7f0a85b6e62c8bf7fd7d24209557f14656b451605f6b"} Mar 13 11:01:33.684995 master-0 kubenswrapper[33013]: I0313 11:01:33.684948 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-dc44494b5-hphsz" Mar 13 11:01:33.685128 master-0 kubenswrapper[33013]: I0313 11:01:33.684972 33013 scope.go:117] "RemoveContainer" containerID="4a18dd10aa2bd5ad5912ef46ab2e43f10b398b974a4eef2d39a9131f286e217f" Mar 13 11:01:33.730547 master-0 kubenswrapper[33013]: I0313 11:01:33.730450 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-dc44494b5-hphsz"] Mar 13 11:01:33.738703 master-0 kubenswrapper[33013]: I0313 11:01:33.738616 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-dc44494b5-hphsz"] Mar 13 11:01:34.721139 master-0 kubenswrapper[33013]: I0313 11:01:34.721056 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" path="/var/lib/kubelet/pods/fbe2e6b6-cd6e-490b-b89e-ed78463012e3/volumes" Mar 13 11:01:36.572270 master-0 kubenswrapper[33013]: I0313 11:01:36.572170 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:36.572883 master-0 kubenswrapper[33013]: E0313 11:01:36.572399 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 11:01:36.572883 master-0 kubenswrapper[33013]: I0313 11:01:36.572429 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:36.572883 master-0 kubenswrapper[33013]: E0313 11:01:36.572504 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:44.572475783 +0000 UTC m=+288.048429132 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-tls" not found Mar 13 11:01:36.572883 master-0 kubenswrapper[33013]: E0313 11:01:36.572580 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:36.572883 master-0 kubenswrapper[33013]: E0313 11:01:36.572663 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:01:44.572644148 +0000 UTC m=+288.048597497 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:38.500137 master-0 kubenswrapper[33013]: I0313 11:01:38.500054 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 11:01:38.500803 master-0 kubenswrapper[33013]: E0313 11:01:38.500406 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" Mar 13 11:01:38.500803 master-0 kubenswrapper[33013]: I0313 11:01:38.500423 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" Mar 13 11:01:38.500803 master-0 kubenswrapper[33013]: I0313 11:01:38.500702 33013 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fbe2e6b6-cd6e-490b-b89e-ed78463012e3" containerName="console" Mar 13 11:01:38.502558 master-0 kubenswrapper[33013]: I0313 11:01:38.502518 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.504677 master-0 kubenswrapper[33013]: I0313 11:01:38.504632 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 13 11:01:38.505108 master-0 kubenswrapper[33013]: I0313 11:01:38.505051 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 13 11:01:38.505108 master-0 kubenswrapper[33013]: I0313 11:01:38.505070 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 13 11:01:38.505393 master-0 kubenswrapper[33013]: I0313 11:01:38.505371 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 13 11:01:38.505511 master-0 kubenswrapper[33013]: I0313 11:01:38.505474 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 13 11:01:38.505656 master-0 kubenswrapper[33013]: I0313 11:01:38.505635 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 13 11:01:38.506005 master-0 kubenswrapper[33013]: I0313 11:01:38.505974 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 13 11:01:38.513881 master-0 kubenswrapper[33013]: I0313 11:01:38.513812 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 13 11:01:38.523496 master-0 kubenswrapper[33013]: I0313 11:01:38.523438 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 11:01:38.602347 master-0 kubenswrapper[33013]: I0313 11:01:38.602278 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e6646441-9498-4102-94c4-970c7ce534af-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.602626 master-0 kubenswrapper[33013]: I0313 11:01:38.602406 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6646441-9498-4102-94c4-970c7ce534af-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.602626 master-0 kubenswrapper[33013]: I0313 11:01:38.602471 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e6646441-9498-4102-94c4-970c7ce534af-config-out\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.602626 master-0 kubenswrapper[33013]: I0313 11:01:38.602549 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.602626 master-0 kubenswrapper[33013]: I0313 11:01:38.602600 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.602769 master-0 kubenswrapper[33013]: I0313 11:01:38.602726 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e6646441-9498-4102-94c4-970c7ce534af-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.602904 master-0 kubenswrapper[33013]: I0313 11:01:38.602842 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-config-volume\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.602904 master-0 kubenswrapper[33013]: I0313 11:01:38.602879 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.602972 master-0 kubenswrapper[33013]: I0313 11:01:38.602930 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e6646441-9498-4102-94c4-970c7ce534af-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 
11:01:38.602972 master-0 kubenswrapper[33013]: I0313 11:01:38.602950 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.602972 master-0 kubenswrapper[33013]: I0313 11:01:38.602966 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpd6r\" (UniqueName: \"kubernetes.io/projected/e6646441-9498-4102-94c4-970c7ce534af-kube-api-access-xpd6r\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.603130 master-0 kubenswrapper[33013]: I0313 11:01:38.603014 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-web-config\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.704755 master-0 kubenswrapper[33013]: I0313 11:01:38.704684 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e6646441-9498-4102-94c4-970c7ce534af-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.705035 master-0 kubenswrapper[33013]: I0313 11:01:38.704902 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-config-volume\") pod \"alertmanager-main-0\" (UID: 
\"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.705035 master-0 kubenswrapper[33013]: I0313 11:01:38.704938 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.705035 master-0 kubenswrapper[33013]: I0313 11:01:38.704979 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e6646441-9498-4102-94c4-970c7ce534af-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.705141 master-0 kubenswrapper[33013]: I0313 11:01:38.705032 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.705289 master-0 kubenswrapper[33013]: I0313 11:01:38.705262 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpd6r\" (UniqueName: \"kubernetes.io/projected/e6646441-9498-4102-94c4-970c7ce534af-kube-api-access-xpd6r\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.705360 master-0 kubenswrapper[33013]: E0313 11:01:38.705290 33013 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 13 11:01:38.705437 master-0 
kubenswrapper[33013]: E0313 11:01:38.705417 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls podName:e6646441-9498-4102-94c4-970c7ce534af nodeName:}" failed. No retries permitted until 2026-03-13 11:01:39.205397849 +0000 UTC m=+282.681351198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e6646441-9498-4102-94c4-970c7ce534af") : secret "alertmanager-main-tls" not found Mar 13 11:01:38.705487 master-0 kubenswrapper[33013]: I0313 11:01:38.705297 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-web-config\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.706953 master-0 kubenswrapper[33013]: I0313 11:01:38.705659 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e6646441-9498-4102-94c4-970c7ce534af-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.706953 master-0 kubenswrapper[33013]: I0313 11:01:38.705810 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6646441-9498-4102-94c4-970c7ce534af-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.706953 master-0 kubenswrapper[33013]: I0313 11:01:38.705902 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e6646441-9498-4102-94c4-970c7ce534af-config-out\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.706953 master-0 kubenswrapper[33013]: I0313 11:01:38.705944 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e6646441-9498-4102-94c4-970c7ce534af-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.706953 master-0 kubenswrapper[33013]: I0313 11:01:38.705977 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e6646441-9498-4102-94c4-970c7ce534af-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.706953 master-0 kubenswrapper[33013]: I0313 11:01:38.706019 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.706953 master-0 kubenswrapper[33013]: I0313 11:01:38.706082 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.707191 master-0 
kubenswrapper[33013]: I0313 11:01:38.707172 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6646441-9498-4102-94c4-970c7ce534af-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.710541 master-0 kubenswrapper[33013]: I0313 11:01:38.710494 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-web-config\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.710634 master-0 kubenswrapper[33013]: I0313 11:01:38.710506 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-config-volume\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.710782 master-0 kubenswrapper[33013]: I0313 11:01:38.710744 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.711016 master-0 kubenswrapper[33013]: I0313 11:01:38.710982 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.718549 master-0 kubenswrapper[33013]: I0313 11:01:38.718509 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.718802 master-0 kubenswrapper[33013]: I0313 11:01:38.718767 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e6646441-9498-4102-94c4-970c7ce534af-config-out\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.725239 master-0 kubenswrapper[33013]: I0313 11:01:38.725190 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e6646441-9498-4102-94c4-970c7ce534af-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:38.727763 master-0 kubenswrapper[33013]: I0313 11:01:38.727704 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpd6r\" (UniqueName: \"kubernetes.io/projected/e6646441-9498-4102-94c4-970c7ce534af-kube-api-access-xpd6r\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:39.219440 master-0 kubenswrapper[33013]: I0313 11:01:39.219352 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: 
\"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:39.219698 master-0 kubenswrapper[33013]: E0313 11:01:39.219605 33013 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 13 11:01:39.219754 master-0 kubenswrapper[33013]: E0313 11:01:39.219742 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls podName:e6646441-9498-4102-94c4-970c7ce534af nodeName:}" failed. No retries permitted until 2026-03-13 11:01:40.219717489 +0000 UTC m=+283.695671008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e6646441-9498-4102-94c4-970c7ce534af") : secret "alertmanager-main-tls" not found Mar 13 11:01:40.239098 master-0 kubenswrapper[33013]: I0313 11:01:40.239027 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:40.239855 master-0 kubenswrapper[33013]: E0313 11:01:40.239197 33013 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 13 11:01:40.239855 master-0 kubenswrapper[33013]: E0313 11:01:40.239268 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls podName:e6646441-9498-4102-94c4-970c7ce534af nodeName:}" failed. 
No retries permitted until 2026-03-13 11:01:42.239251689 +0000 UTC m=+285.715205028 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e6646441-9498-4102-94c4-970c7ce534af") : secret "alertmanager-main-tls" not found Mar 13 11:01:42.271681 master-0 kubenswrapper[33013]: I0313 11:01:42.271572 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:42.272262 master-0 kubenswrapper[33013]: E0313 11:01:42.271807 33013 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 13 11:01:42.272262 master-0 kubenswrapper[33013]: E0313 11:01:42.271909 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls podName:e6646441-9498-4102-94c4-970c7ce534af nodeName:}" failed. No retries permitted until 2026-03-13 11:01:46.271889802 +0000 UTC m=+289.747843151 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e6646441-9498-4102-94c4-970c7ce534af") : secret "alertmanager-main-tls" not found Mar 13 11:01:44.611803 master-0 kubenswrapper[33013]: E0313 11:01:44.611692 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:44.611803 master-0 kubenswrapper[33013]: E0313 11:01:44.611815 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:02:00.611796964 +0000 UTC m=+304.087750303 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 13 11:01:44.612703 master-0 kubenswrapper[33013]: I0313 11:01:44.611505 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:44.612703 master-0 kubenswrapper[33013]: I0313 11:01:44.612474 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls\") pod 
\"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:01:44.612703 master-0 kubenswrapper[33013]: E0313 11:01:44.612571 33013 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 13 11:01:44.612703 master-0 kubenswrapper[33013]: E0313 11:01:44.612623 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls podName:d648a041-61d2-4e6b-aa52-d5951ad4edf1 nodeName:}" failed. No retries permitted until 2026-03-13 11:02:00.612612567 +0000 UTC m=+304.088565916 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "d648a041-61d2-4e6b-aa52-d5951ad4edf1") : secret "prometheus-k8s-tls" not found Mar 13 11:01:46.343122 master-0 kubenswrapper[33013]: I0313 11:01:46.343055 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:46.343897 master-0 kubenswrapper[33013]: E0313 11:01:46.343260 33013 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 13 11:01:46.343897 master-0 kubenswrapper[33013]: E0313 11:01:46.343352 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls podName:e6646441-9498-4102-94c4-970c7ce534af nodeName:}" failed. 
No retries permitted until 2026-03-13 11:01:54.34333353 +0000 UTC m=+297.819286879 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e6646441-9498-4102-94c4-970c7ce534af") : secret "alertmanager-main-tls" not found Mar 13 11:01:47.820163 master-0 kubenswrapper[33013]: I0313 11:01:47.820094 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:47.827737 master-0 kubenswrapper[33013]: I0313 11:01:47.827665 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-57bfbb88b4-c4p4m" Mar 13 11:01:52.546805 master-0 kubenswrapper[33013]: E0313 11:01:52.546735 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:01:52.547495 master-0 kubenswrapper[33013]: E0313 11:01:52.546856 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 11:02:56.546836118 +0000 UTC m=+360.022789467 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:01:54.413411 master-0 kubenswrapper[33013]: I0313 11:01:54.413288 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:54.417734 master-0 kubenswrapper[33013]: I0313 11:01:54.417646 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e6646441-9498-4102-94c4-970c7ce534af-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e6646441-9498-4102-94c4-970c7ce534af\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:54.424090 master-0 kubenswrapper[33013]: I0313 11:01:54.423995 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 11:01:54.790272 master-0 kubenswrapper[33013]: I0313 11:01:54.790217 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 11:01:54.796937 master-0 kubenswrapper[33013]: W0313 11:01:54.796861 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6646441_9498_4102_94c4_970c7ce534af.slice/crio-5cef713bca3e26becf652efd48730cbb8d8cfb7d8439e94b364571c7dc1c9610 WatchSource:0}: Error finding container 5cef713bca3e26becf652efd48730cbb8d8cfb7d8439e94b364571c7dc1c9610: Status 404 returned error can't find the container with id 5cef713bca3e26becf652efd48730cbb8d8cfb7d8439e94b364571c7dc1c9610 Mar 13 11:01:54.926347 master-0 kubenswrapper[33013]: I0313 11:01:54.926284 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e6646441-9498-4102-94c4-970c7ce534af","Type":"ContainerStarted","Data":"5cef713bca3e26becf652efd48730cbb8d8cfb7d8439e94b364571c7dc1c9610"} Mar 13 11:01:55.935457 master-0 kubenswrapper[33013]: I0313 11:01:55.935414 33013 generic.go:334] "Generic (PLEG): container finished" podID="e6646441-9498-4102-94c4-970c7ce534af" containerID="2d85ca46c7b0f93de0298081be729a764700ae5c49d69bffbcf1378076b7f783" exitCode=0 Mar 13 11:01:55.935457 master-0 kubenswrapper[33013]: I0313 11:01:55.935464 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e6646441-9498-4102-94c4-970c7ce534af","Type":"ContainerDied","Data":"2d85ca46c7b0f93de0298081be729a764700ae5c49d69bffbcf1378076b7f783"} Mar 13 11:01:56.709934 master-0 kubenswrapper[33013]: I0313 11:01:56.709883 33013 kubelet.go:1505] "Image garbage collection succeeded" Mar 13 11:01:59.965439 master-0 kubenswrapper[33013]: I0313 11:01:59.965368 33013 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e6646441-9498-4102-94c4-970c7ce534af","Type":"ContainerStarted","Data":"1ec0135937d70f67a4da5379ceb6091a49495e30d4d6a332840d39652d5ed49b"} Mar 13 11:01:59.965439 master-0 kubenswrapper[33013]: I0313 11:01:59.965448 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e6646441-9498-4102-94c4-970c7ce534af","Type":"ContainerStarted","Data":"1e1b7a993679ac04b83ce1c5334d3743c8d5114dea6663111c2ea9952ba3db55"} Mar 13 11:01:59.966244 master-0 kubenswrapper[33013]: I0313 11:01:59.965467 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e6646441-9498-4102-94c4-970c7ce534af","Type":"ContainerStarted","Data":"457cd4755ef17575187e022233621db5297c1db718bd443a2712a051635f7e0f"} Mar 13 11:01:59.966244 master-0 kubenswrapper[33013]: I0313 11:01:59.965480 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e6646441-9498-4102-94c4-970c7ce534af","Type":"ContainerStarted","Data":"34d574f89dc3a843ea435e8c5eaafddeece7c81d5869df131418d033c0cdbd25"} Mar 13 11:02:00.624471 master-0 kubenswrapper[33013]: I0313 11:02:00.624397 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:02:00.624731 master-0 kubenswrapper[33013]: I0313 11:02:00.624524 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") 
" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:02:00.628561 master-0 kubenswrapper[33013]: I0313 11:02:00.628520 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:02:00.629203 master-0 kubenswrapper[33013]: I0313 11:02:00.629143 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/d648a041-61d2-4e6b-aa52-d5951ad4edf1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"d648a041-61d2-4e6b-aa52-d5951ad4edf1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:02:00.654388 master-0 kubenswrapper[33013]: I0313 11:02:00.654329 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:02:00.979862 master-0 kubenswrapper[33013]: I0313 11:02:00.979819 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e6646441-9498-4102-94c4-970c7ce534af","Type":"ContainerStarted","Data":"46e54ca434cb8d17e59b0300e6da8c258932f887a49cc141c59551674644132a"} Mar 13 11:02:00.979862 master-0 kubenswrapper[33013]: I0313 11:02:00.979862 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e6646441-9498-4102-94c4-970c7ce534af","Type":"ContainerStarted","Data":"6cb3e77e7a4e92b383b0e6c2d8a8ad4c807e3fa68de23d956f9d2b8f9dc1430c"} Mar 13 11:02:01.261872 master-0 kubenswrapper[33013]: I0313 11:02:01.261684 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=19.851741131 podStartE2EDuration="23.261662231s" podCreationTimestamp="2026-03-13 11:01:38 +0000 UTC" firstStartedPulling="2026-03-13 11:01:55.937065802 +0000 UTC m=+299.413019151" lastFinishedPulling="2026-03-13 11:01:59.346986902 +0000 UTC m=+302.822940251" observedRunningTime="2026-03-13 11:02:01.257468352 +0000 UTC m=+304.733421711" watchObservedRunningTime="2026-03-13 11:02:01.261662231 +0000 UTC m=+304.737615580" Mar 13 11:02:01.327646 master-0 kubenswrapper[33013]: I0313 11:02:01.320694 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 11:02:01.348401 master-0 kubenswrapper[33013]: W0313 11:02:01.348356 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd648a041_61d2_4e6b_aa52_d5951ad4edf1.slice/crio-fe15b88d6ae62bc570a7f16fee2a32bf237d58826f3f4dffaedaca931d092b49 WatchSource:0}: Error finding container fe15b88d6ae62bc570a7f16fee2a32bf237d58826f3f4dffaedaca931d092b49: Status 404 returned 
error can't find the container with id fe15b88d6ae62bc570a7f16fee2a32bf237d58826f3f4dffaedaca931d092b49 Mar 13 11:02:01.990351 master-0 kubenswrapper[33013]: I0313 11:02:01.989323 33013 generic.go:334] "Generic (PLEG): container finished" podID="d648a041-61d2-4e6b-aa52-d5951ad4edf1" containerID="ac56f7f3ba480ae6b708a51e1d6469c208e09be69861933bc29d5b304f2a5fbc" exitCode=0 Mar 13 11:02:01.990351 master-0 kubenswrapper[33013]: I0313 11:02:01.989770 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d648a041-61d2-4e6b-aa52-d5951ad4edf1","Type":"ContainerDied","Data":"ac56f7f3ba480ae6b708a51e1d6469c208e09be69861933bc29d5b304f2a5fbc"} Mar 13 11:02:01.990351 master-0 kubenswrapper[33013]: I0313 11:02:01.989806 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d648a041-61d2-4e6b-aa52-d5951ad4edf1","Type":"ContainerStarted","Data":"fe15b88d6ae62bc570a7f16fee2a32bf237d58826f3f4dffaedaca931d092b49"} Mar 13 11:02:07.038839 master-0 kubenswrapper[33013]: I0313 11:02:07.038742 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d648a041-61d2-4e6b-aa52-d5951ad4edf1","Type":"ContainerStarted","Data":"bd9a4b567627b9258eaf3ee5531abf1f7dc1b68ae8ccd8c4822f8d6d5c602414"} Mar 13 11:02:07.038839 master-0 kubenswrapper[33013]: I0313 11:02:07.038793 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d648a041-61d2-4e6b-aa52-d5951ad4edf1","Type":"ContainerStarted","Data":"dab6ca0913b8eee9712a65c98685ac5c0b8b5fc60b60535c4a181c2166da8c9b"} Mar 13 11:02:07.038839 master-0 kubenswrapper[33013]: I0313 11:02:07.038802 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"d648a041-61d2-4e6b-aa52-d5951ad4edf1","Type":"ContainerStarted","Data":"2a63fa5e991224dc783f0dc1c666d44ae703ee6e618d8b3f710eb2f86a291821"} Mar 13 11:02:07.038839 master-0 kubenswrapper[33013]: I0313 11:02:07.038811 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d648a041-61d2-4e6b-aa52-d5951ad4edf1","Type":"ContainerStarted","Data":"b0cfdd875795d15ab67b14da9d0a1d30ef1d88fe59458743669be14ad0c80e90"} Mar 13 11:02:07.038839 master-0 kubenswrapper[33013]: I0313 11:02:07.038822 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d648a041-61d2-4e6b-aa52-d5951ad4edf1","Type":"ContainerStarted","Data":"02cc187ba962e30a4b4458ab02e5197941a4833c20120721eec5b6f9f29050c1"} Mar 13 11:02:08.052895 master-0 kubenswrapper[33013]: I0313 11:02:08.052832 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"d648a041-61d2-4e6b-aa52-d5951ad4edf1","Type":"ContainerStarted","Data":"3703d071be7e68dd93292ba3c755c6f5792340234fc7bae6beb2864ad859d13d"} Mar 13 11:02:08.209245 master-0 kubenswrapper[33013]: I0313 11:02:08.209128 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=36.13949956 podStartE2EDuration="40.209105386s" podCreationTimestamp="2026-03-13 11:01:28 +0000 UTC" firstStartedPulling="2026-03-13 11:02:01.991409547 +0000 UTC m=+305.467362896" lastFinishedPulling="2026-03-13 11:02:06.061015373 +0000 UTC m=+309.536968722" observedRunningTime="2026-03-13 11:02:08.200549694 +0000 UTC m=+311.676503063" watchObservedRunningTime="2026-03-13 11:02:08.209105386 +0000 UTC m=+311.685058735" Mar 13 11:02:10.655005 master-0 kubenswrapper[33013]: I0313 11:02:10.654951 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:02:33.301156 master-0 
kubenswrapper[33013]: I0313 11:02:33.299390 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 13 11:02:33.301156 master-0 kubenswrapper[33013]: I0313 11:02:33.300907 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.303332 master-0 kubenswrapper[33013]: I0313 11:02:33.303266 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-928wn" Mar 13 11:02:33.305334 master-0 kubenswrapper[33013]: I0313 11:02:33.303575 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 11:02:33.308146 master-0 kubenswrapper[33013]: I0313 11:02:33.308043 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 13 11:02:33.435932 master-0 kubenswrapper[33013]: I0313 11:02:33.435784 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.435932 master-0 kubenswrapper[33013]: I0313 11:02:33.435948 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-var-lock\") pod \"installer-5-master-0\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.436254 master-0 kubenswrapper[33013]: I0313 11:02:33.435974 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kube-api-access\") pod \"installer-5-master-0\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.538913 master-0 kubenswrapper[33013]: I0313 11:02:33.538805 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.538913 master-0 kubenswrapper[33013]: I0313 11:02:33.538919 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-var-lock\") pod \"installer-5-master-0\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.539306 master-0 kubenswrapper[33013]: I0313 11:02:33.539145 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kube-api-access\") pod \"installer-5-master-0\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.539306 master-0 kubenswrapper[33013]: I0313 11:02:33.539242 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-var-lock\") pod \"installer-5-master-0\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.540029 master-0 kubenswrapper[33013]: I0313 11:02:33.539989 33013 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.559570 master-0 kubenswrapper[33013]: I0313 11:02:33.559441 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kube-api-access\") pod \"installer-5-master-0\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:33.627463 master-0 kubenswrapper[33013]: I0313 11:02:33.627374 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:02:34.088902 master-0 kubenswrapper[33013]: I0313 11:02:34.088628 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 13 11:02:34.250863 master-0 kubenswrapper[33013]: I0313 11:02:34.250765 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"31da838d-aec3-43f1-8eb5-69b65aa77cf6","Type":"ContainerStarted","Data":"c33d8229809e7f651d53b2e0e77d8417ef5035a8bf0186ca216323edec8604bf"} Mar 13 11:02:35.265981 master-0 kubenswrapper[33013]: I0313 11:02:35.265861 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"31da838d-aec3-43f1-8eb5-69b65aa77cf6","Type":"ContainerStarted","Data":"9d71f53249b118d6a8fd08413a6e235fe6ef13bfc05e3e1324c5de59da298ead"} Mar 13 11:02:35.293160 master-0 kubenswrapper[33013]: I0313 11:02:35.293060 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" 
podStartSLOduration=2.293029965 podStartE2EDuration="2.293029965s" podCreationTimestamp="2026-03-13 11:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:02:35.287826168 +0000 UTC m=+338.763779527" watchObservedRunningTime="2026-03-13 11:02:35.293029965 +0000 UTC m=+338.768983314" Mar 13 11:02:56.641181 master-0 kubenswrapper[33013]: E0313 11:02:56.641084 33013 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7mk1tpvcusf46: secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:02:56.641181 master-0 kubenswrapper[33013]: E0313 11:02:56.641173 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle podName:b68ed803-45e2-42f1-99b1-33cf59b01d74 nodeName:}" failed. No retries permitted until 2026-03-13 11:04:58.641153238 +0000 UTC m=+482.117106587 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle") pod "metrics-server-68597ccc5b-xrb8c" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74") : secret "metrics-server-7mk1tpvcusf46" not found Mar 13 11:03:00.654970 master-0 kubenswrapper[33013]: I0313 11:03:00.654886 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:03:00.689047 master-0 kubenswrapper[33013]: I0313 11:03:00.688990 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:03:01.498478 master-0 kubenswrapper[33013]: I0313 11:03:01.498380 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 11:03:07.251407 master-0 kubenswrapper[33013]: I0313 11:03:07.251325 33013 kubelet.go:2431] "SyncLoop REMOVE" 
source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 11:03:07.252155 master-0 kubenswrapper[33013]: I0313 11:03:07.251855 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" containerID="cri-o://f64a190ab6bfcd5d71dd09d08481400c5646db74ded1e7ad4ac16e4a9b0b9632" gracePeriod=30 Mar 13 11:03:07.252155 master-0 kubenswrapper[33013]: I0313 11:03:07.251933 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://8ef4ca3fd55a1fdc272bbe95b06fd59615f0875eb40d0760256756564104e8c0" gracePeriod=30 Mar 13 11:03:07.252155 master-0 kubenswrapper[33013]: I0313 11:03:07.251987 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://c628e765eaabffc23db2c1635eeb15519da1c1cbfb8a52269fa9da1481c956a3" gracePeriod=30 Mar 13 11:03:07.252155 master-0 kubenswrapper[33013]: I0313 11:03:07.252018 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6aa84d96c35221e650d254cec915ee90" containerName="cluster-policy-controller" containerID="cri-o://9055e315c8a514a2e7caff4002ccd935f6b8f26c1543cb6f8b2224217493efae" gracePeriod=30 Mar 13 11:03:07.253041 master-0 kubenswrapper[33013]: I0313 11:03:07.252873 33013 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 11:03:07.253370 master-0 
kubenswrapper[33013]: E0313 11:03:07.253289 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" Mar 13 11:03:07.253370 master-0 kubenswrapper[33013]: I0313 11:03:07.253326 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" Mar 13 11:03:07.253370 master-0 kubenswrapper[33013]: E0313 11:03:07.253340 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" Mar 13 11:03:07.253370 master-0 kubenswrapper[33013]: I0313 11:03:07.253353 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" Mar 13 11:03:07.253543 master-0 kubenswrapper[33013]: E0313 11:03:07.253379 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager-recovery-controller" Mar 13 11:03:07.253543 master-0 kubenswrapper[33013]: I0313 11:03:07.253393 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager-recovery-controller" Mar 13 11:03:07.253543 master-0 kubenswrapper[33013]: E0313 11:03:07.253418 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager-cert-syncer" Mar 13 11:03:07.253543 master-0 kubenswrapper[33013]: I0313 11:03:07.253428 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager-cert-syncer" Mar 13 11:03:07.253543 master-0 kubenswrapper[33013]: E0313 11:03:07.253451 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa84d96c35221e650d254cec915ee90" containerName="cluster-policy-controller" Mar 13 
11:03:07.253543 master-0 kubenswrapper[33013]: I0313 11:03:07.253462 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa84d96c35221e650d254cec915ee90" containerName="cluster-policy-controller" Mar 13 11:03:07.253827 master-0 kubenswrapper[33013]: I0313 11:03:07.253811 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager-cert-syncer" Mar 13 11:03:07.253950 master-0 kubenswrapper[33013]: I0313 11:03:07.253860 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" Mar 13 11:03:07.253950 master-0 kubenswrapper[33013]: I0313 11:03:07.253882 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" Mar 13 11:03:07.253950 master-0 kubenswrapper[33013]: I0313 11:03:07.253902 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager-recovery-controller" Mar 13 11:03:07.253950 master-0 kubenswrapper[33013]: I0313 11:03:07.253935 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa84d96c35221e650d254cec915ee90" containerName="cluster-policy-controller" Mar 13 11:03:07.254123 master-0 kubenswrapper[33013]: I0313 11:03:07.253956 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" Mar 13 11:03:07.254218 master-0 kubenswrapper[33013]: E0313 11:03:07.254203 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" Mar 13 11:03:07.254281 master-0 kubenswrapper[33013]: I0313 11:03:07.254222 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa84d96c35221e650d254cec915ee90" containerName="kube-controller-manager" 
Mar 13 11:03:07.410081 master-0 kubenswrapper[33013]: I0313 11:03:07.410002 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5ad616f4656fbce37c87f129f788ab06-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5ad616f4656fbce37c87f129f788ab06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:07.410276 master-0 kubenswrapper[33013]: I0313 11:03:07.410219 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5ad616f4656fbce37c87f129f788ab06-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5ad616f4656fbce37c87f129f788ab06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:07.443647 master-0 kubenswrapper[33013]: I0313 11:03:07.443581 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager/1.log" Mar 13 11:03:07.444809 master-0 kubenswrapper[33013]: I0313 11:03:07.444772 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager-cert-syncer/0.log" Mar 13 11:03:07.445382 master-0 kubenswrapper[33013]: I0313 11:03:07.445348 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:07.449718 master-0 kubenswrapper[33013]: I0313 11:03:07.449675 33013 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="6aa84d96c35221e650d254cec915ee90" podUID="5ad616f4656fbce37c87f129f788ab06" Mar 13 11:03:07.513840 master-0 kubenswrapper[33013]: I0313 11:03:07.513720 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-cert-dir\") pod \"6aa84d96c35221e650d254cec915ee90\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " Mar 13 11:03:07.514013 master-0 kubenswrapper[33013]: I0313 11:03:07.513844 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-resource-dir\") pod \"6aa84d96c35221e650d254cec915ee90\" (UID: \"6aa84d96c35221e650d254cec915ee90\") " Mar 13 11:03:07.514013 master-0 kubenswrapper[33013]: I0313 11:03:07.513925 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "6aa84d96c35221e650d254cec915ee90" (UID: "6aa84d96c35221e650d254cec915ee90"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:03:07.514111 master-0 kubenswrapper[33013]: I0313 11:03:07.514008 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "6aa84d96c35221e650d254cec915ee90" (UID: "6aa84d96c35221e650d254cec915ee90"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:03:07.515010 master-0 kubenswrapper[33013]: I0313 11:03:07.514972 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5ad616f4656fbce37c87f129f788ab06-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5ad616f4656fbce37c87f129f788ab06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:07.515172 master-0 kubenswrapper[33013]: I0313 11:03:07.515133 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5ad616f4656fbce37c87f129f788ab06-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5ad616f4656fbce37c87f129f788ab06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:07.515238 master-0 kubenswrapper[33013]: I0313 11:03:07.515185 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5ad616f4656fbce37c87f129f788ab06-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5ad616f4656fbce37c87f129f788ab06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:07.515298 master-0 kubenswrapper[33013]: I0313 11:03:07.515272 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5ad616f4656fbce37c87f129f788ab06-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5ad616f4656fbce37c87f129f788ab06\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:07.515619 master-0 kubenswrapper[33013]: I0313 11:03:07.515551 33013 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 
11:03:07.515619 master-0 kubenswrapper[33013]: I0313 11:03:07.515584 33013 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6aa84d96c35221e650d254cec915ee90-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:07.516461 master-0 kubenswrapper[33013]: I0313 11:03:07.516422 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager/1.log" Mar 13 11:03:07.517828 master-0 kubenswrapper[33013]: I0313 11:03:07.517788 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager-cert-syncer/0.log" Mar 13 11:03:07.518421 master-0 kubenswrapper[33013]: I0313 11:03:07.518373 33013 generic.go:334] "Generic (PLEG): container finished" podID="6aa84d96c35221e650d254cec915ee90" containerID="f64a190ab6bfcd5d71dd09d08481400c5646db74ded1e7ad4ac16e4a9b0b9632" exitCode=0 Mar 13 11:03:07.518421 master-0 kubenswrapper[33013]: I0313 11:03:07.518407 33013 generic.go:334] "Generic (PLEG): container finished" podID="6aa84d96c35221e650d254cec915ee90" containerID="8ef4ca3fd55a1fdc272bbe95b06fd59615f0875eb40d0760256756564104e8c0" exitCode=0 Mar 13 11:03:07.518421 master-0 kubenswrapper[33013]: I0313 11:03:07.518418 33013 generic.go:334] "Generic (PLEG): container finished" podID="6aa84d96c35221e650d254cec915ee90" containerID="c628e765eaabffc23db2c1635eeb15519da1c1cbfb8a52269fa9da1481c956a3" exitCode=2 Mar 13 11:03:07.519267 master-0 kubenswrapper[33013]: I0313 11:03:07.518430 33013 generic.go:334] "Generic (PLEG): container finished" podID="6aa84d96c35221e650d254cec915ee90" containerID="9055e315c8a514a2e7caff4002ccd935f6b8f26c1543cb6f8b2224217493efae" exitCode=0 Mar 13 11:03:07.519267 master-0 kubenswrapper[33013]: I0313 11:03:07.518493 33013 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:07.519267 master-0 kubenswrapper[33013]: I0313 11:03:07.518519 33013 scope.go:117] "RemoveContainer" containerID="4bc2563e4687b16637c0689e693842fbd842df192a43d7ee20a7af39f977383e" Mar 13 11:03:07.519267 master-0 kubenswrapper[33013]: I0313 11:03:07.518494 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7ee7c0157ea1a3aa7abb463c26a21b9f9c80a7c51726be3dad9c08112783426" Mar 13 11:03:07.521896 master-0 kubenswrapper[33013]: I0313 11:03:07.521842 33013 generic.go:334] "Generic (PLEG): container finished" podID="31da838d-aec3-43f1-8eb5-69b65aa77cf6" containerID="9d71f53249b118d6a8fd08413a6e235fe6ef13bfc05e3e1324c5de59da298ead" exitCode=0 Mar 13 11:03:07.521983 master-0 kubenswrapper[33013]: I0313 11:03:07.521899 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"31da838d-aec3-43f1-8eb5-69b65aa77cf6","Type":"ContainerDied","Data":"9d71f53249b118d6a8fd08413a6e235fe6ef13bfc05e3e1324c5de59da298ead"} Mar 13 11:03:07.522809 master-0 kubenswrapper[33013]: I0313 11:03:07.522759 33013 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="6aa84d96c35221e650d254cec915ee90" podUID="5ad616f4656fbce37c87f129f788ab06" Mar 13 11:03:07.574135 master-0 kubenswrapper[33013]: I0313 11:03:07.574072 33013 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="6aa84d96c35221e650d254cec915ee90" podUID="5ad616f4656fbce37c87f129f788ab06" Mar 13 11:03:08.532076 master-0 kubenswrapper[33013]: I0313 11:03:08.532009 33013 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6aa84d96c35221e650d254cec915ee90/kube-controller-manager-cert-syncer/0.log" Mar 13 11:03:08.723290 master-0 kubenswrapper[33013]: I0313 11:03:08.723215 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa84d96c35221e650d254cec915ee90" path="/var/lib/kubelet/pods/6aa84d96c35221e650d254cec915ee90/volumes" Mar 13 11:03:08.830017 master-0 kubenswrapper[33013]: I0313 11:03:08.829936 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:03:08.834897 master-0 kubenswrapper[33013]: I0313 11:03:08.834848 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-var-lock\") pod \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " Mar 13 11:03:08.835056 master-0 kubenswrapper[33013]: I0313 11:03:08.834932 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kube-api-access\") pod \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " Mar 13 11:03:08.835056 master-0 kubenswrapper[33013]: I0313 11:03:08.834953 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kubelet-dir\") pod \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\" (UID: \"31da838d-aec3-43f1-8eb5-69b65aa77cf6\") " Mar 13 11:03:08.835056 master-0 kubenswrapper[33013]: I0313 11:03:08.834993 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-var-lock" (OuterVolumeSpecName: "var-lock") pod 
"31da838d-aec3-43f1-8eb5-69b65aa77cf6" (UID: "31da838d-aec3-43f1-8eb5-69b65aa77cf6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:03:08.835190 master-0 kubenswrapper[33013]: I0313 11:03:08.835106 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "31da838d-aec3-43f1-8eb5-69b65aa77cf6" (UID: "31da838d-aec3-43f1-8eb5-69b65aa77cf6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:03:08.835369 master-0 kubenswrapper[33013]: I0313 11:03:08.835343 33013 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:08.835369 master-0 kubenswrapper[33013]: I0313 11:03:08.835364 33013 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:08.838770 master-0 kubenswrapper[33013]: I0313 11:03:08.838705 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "31da838d-aec3-43f1-8eb5-69b65aa77cf6" (UID: "31da838d-aec3-43f1-8eb5-69b65aa77cf6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:03:08.937172 master-0 kubenswrapper[33013]: I0313 11:03:08.937086 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31da838d-aec3-43f1-8eb5-69b65aa77cf6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:09.541007 master-0 kubenswrapper[33013]: I0313 11:03:09.540947 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"31da838d-aec3-43f1-8eb5-69b65aa77cf6","Type":"ContainerDied","Data":"c33d8229809e7f651d53b2e0e77d8417ef5035a8bf0186ca216323edec8604bf"} Mar 13 11:03:09.541007 master-0 kubenswrapper[33013]: I0313 11:03:09.540992 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c33d8229809e7f651d53b2e0e77d8417ef5035a8bf0186ca216323edec8604bf" Mar 13 11:03:09.541007 master-0 kubenswrapper[33013]: I0313 11:03:09.541005 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 11:03:20.713079 master-0 kubenswrapper[33013]: I0313 11:03:20.713002 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:20.734765 master-0 kubenswrapper[33013]: I0313 11:03:20.734720 33013 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a271a635-3312-4dc0-b52a-c524d9799e6d" Mar 13 11:03:20.734909 master-0 kubenswrapper[33013]: I0313 11:03:20.734790 33013 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a271a635-3312-4dc0-b52a-c524d9799e6d" Mar 13 11:03:20.749885 master-0 kubenswrapper[33013]: I0313 11:03:20.749815 33013 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:20.752174 master-0 kubenswrapper[33013]: I0313 11:03:20.751259 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 11:03:20.762661 master-0 kubenswrapper[33013]: I0313 11:03:20.761966 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 11:03:20.766815 master-0 kubenswrapper[33013]: I0313 11:03:20.766460 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:20.772794 master-0 kubenswrapper[33013]: I0313 11:03:20.772756 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 11:03:20.797281 master-0 kubenswrapper[33013]: W0313 11:03:20.797210 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ad616f4656fbce37c87f129f788ab06.slice/crio-f51600c2803a6da982844d91cc5c8e1e09dee53653325eadc91194ed0fbc6de9 WatchSource:0}: Error finding container f51600c2803a6da982844d91cc5c8e1e09dee53653325eadc91194ed0fbc6de9: Status 404 returned error can't find the container with id f51600c2803a6da982844d91cc5c8e1e09dee53653325eadc91194ed0fbc6de9 Mar 13 11:03:21.639440 master-0 kubenswrapper[33013]: I0313 11:03:21.639381 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5ad616f4656fbce37c87f129f788ab06","Type":"ContainerStarted","Data":"ffea2c721d046fdcb79151d79faebd8b86873ac139977b00b032e884c46d35d1"} Mar 13 11:03:21.639440 master-0 kubenswrapper[33013]: I0313 11:03:21.639428 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5ad616f4656fbce37c87f129f788ab06","Type":"ContainerStarted","Data":"12ca1d28ec4fa210c07603b0e4a7ace52df4ce318b60307bded1cef72524382f"} Mar 13 11:03:21.639440 master-0 kubenswrapper[33013]: I0313 11:03:21.639438 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5ad616f4656fbce37c87f129f788ab06","Type":"ContainerStarted","Data":"d15978ad368260f4970900195660ff329f9fae6bd88e7fd9c0e8ad7ebca05134"} Mar 13 11:03:21.639440 master-0 kubenswrapper[33013]: I0313 11:03:21.639447 33013 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5ad616f4656fbce37c87f129f788ab06","Type":"ContainerStarted","Data":"f51600c2803a6da982844d91cc5c8e1e09dee53653325eadc91194ed0fbc6de9"} Mar 13 11:03:22.649351 master-0 kubenswrapper[33013]: I0313 11:03:22.649282 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5ad616f4656fbce37c87f129f788ab06","Type":"ContainerStarted","Data":"a8d36f4ec7cb5043d0b01f26d0a020f258a5fb7a2aced96a279ac3855909053b"} Mar 13 11:03:22.674225 master-0 kubenswrapper[33013]: I0313 11:03:22.674071 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.674047975 podStartE2EDuration="2.674047975s" podCreationTimestamp="2026-03-13 11:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:03:22.672491581 +0000 UTC m=+386.148444940" watchObservedRunningTime="2026-03-13 11:03:22.674047975 +0000 UTC m=+386.150001324" Mar 13 11:03:30.767356 master-0 kubenswrapper[33013]: I0313 11:03:30.767266 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:30.767356 master-0 kubenswrapper[33013]: I0313 11:03:30.767330 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:30.767356 master-0 kubenswrapper[33013]: I0313 11:03:30.767341 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:30.767356 master-0 kubenswrapper[33013]: I0313 11:03:30.767351 33013 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:30.768339 master-0 kubenswrapper[33013]: I0313 11:03:30.767966 33013 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 11:03:30.768339 master-0 kubenswrapper[33013]: I0313 11:03:30.768049 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5ad616f4656fbce37c87f129f788ab06" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 11:03:30.771361 master-0 kubenswrapper[33013]: I0313 11:03:30.771320 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:31.726393 master-0 kubenswrapper[33013]: I0313 11:03:31.726328 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:37.760520 master-0 kubenswrapper[33013]: I0313 11:03:37.760374 33013 generic.go:334] "Generic (PLEG): container finished" podID="b68ed803-45e2-42f1-99b1-33cf59b01d74" containerID="a53ccb10d38781462661d28f14cee8ad4f8374b8664112cbbcf7c91c9615f04e" exitCode=0 Mar 13 11:03:37.760520 master-0 kubenswrapper[33013]: I0313 11:03:37.760431 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" event={"ID":"b68ed803-45e2-42f1-99b1-33cf59b01d74","Type":"ContainerDied","Data":"a53ccb10d38781462661d28f14cee8ad4f8374b8664112cbbcf7c91c9615f04e"} Mar 13 
11:03:37.853356 master-0 kubenswrapper[33013]: I0313 11:03:37.853328 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 11:03:37.898513 master-0 kubenswrapper[33013]: I0313 11:03:37.898424 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5hq9\" (UniqueName: \"kubernetes.io/projected/b68ed803-45e2-42f1-99b1-33cf59b01d74-kube-api-access-q5hq9\") pod \"b68ed803-45e2-42f1-99b1-33cf59b01d74\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " Mar 13 11:03:37.898792 master-0 kubenswrapper[33013]: I0313 11:03:37.898603 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle\") pod \"b68ed803-45e2-42f1-99b1-33cf59b01d74\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " Mar 13 11:03:37.898792 master-0 kubenswrapper[33013]: I0313 11:03:37.898639 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle\") pod \"b68ed803-45e2-42f1-99b1-33cf59b01d74\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " Mar 13 11:03:37.898792 master-0 kubenswrapper[33013]: I0313 11:03:37.898690 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b68ed803-45e2-42f1-99b1-33cf59b01d74-audit-log\") pod \"b68ed803-45e2-42f1-99b1-33cf59b01d74\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " Mar 13 11:03:37.898792 master-0 kubenswrapper[33013]: I0313 11:03:37.898769 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls\") pod \"b68ed803-45e2-42f1-99b1-33cf59b01d74\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " Mar 13 11:03:37.898792 master-0 kubenswrapper[33013]: I0313 11:03:37.898789 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs\") pod \"b68ed803-45e2-42f1-99b1-33cf59b01d74\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " Mar 13 11:03:37.899022 master-0 kubenswrapper[33013]: I0313 11:03:37.898845 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles\") pod \"b68ed803-45e2-42f1-99b1-33cf59b01d74\" (UID: \"b68ed803-45e2-42f1-99b1-33cf59b01d74\") " Mar 13 11:03:37.899387 master-0 kubenswrapper[33013]: I0313 11:03:37.899316 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "b68ed803-45e2-42f1-99b1-33cf59b01d74" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:03:37.899387 master-0 kubenswrapper[33013]: I0313 11:03:37.899361 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b68ed803-45e2-42f1-99b1-33cf59b01d74-audit-log" (OuterVolumeSpecName: "audit-log") pod "b68ed803-45e2-42f1-99b1-33cf59b01d74" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74"). InnerVolumeSpecName "audit-log". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:03:37.899770 master-0 kubenswrapper[33013]: I0313 11:03:37.899736 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "b68ed803-45e2-42f1-99b1-33cf59b01d74" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:03:37.904710 master-0 kubenswrapper[33013]: I0313 11:03:37.902727 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "b68ed803-45e2-42f1-99b1-33cf59b01d74" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:03:37.904710 master-0 kubenswrapper[33013]: I0313 11:03:37.902746 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "b68ed803-45e2-42f1-99b1-33cf59b01d74" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:03:37.904710 master-0 kubenswrapper[33013]: I0313 11:03:37.902755 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68ed803-45e2-42f1-99b1-33cf59b01d74-kube-api-access-q5hq9" (OuterVolumeSpecName: "kube-api-access-q5hq9") pod "b68ed803-45e2-42f1-99b1-33cf59b01d74" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74"). InnerVolumeSpecName "kube-api-access-q5hq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:03:37.912277 master-0 kubenswrapper[33013]: I0313 11:03:37.912179 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "b68ed803-45e2-42f1-99b1-33cf59b01d74" (UID: "b68ed803-45e2-42f1-99b1-33cf59b01d74"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:03:38.000930 master-0 kubenswrapper[33013]: I0313 11:03:38.000855 33013 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:38.000930 master-0 kubenswrapper[33013]: I0313 11:03:38.000915 33013 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:38.000930 master-0 kubenswrapper[33013]: I0313 11:03:38.000927 33013 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:38.000930 master-0 kubenswrapper[33013]: I0313 11:03:38.000938 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5hq9\" (UniqueName: \"kubernetes.io/projected/b68ed803-45e2-42f1-99b1-33cf59b01d74-kube-api-access-q5hq9\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:38.000930 master-0 kubenswrapper[33013]: I0313 11:03:38.000950 33013 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b68ed803-45e2-42f1-99b1-33cf59b01d74-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:38.000930 master-0 kubenswrapper[33013]: I0313 11:03:38.000961 33013 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ed803-45e2-42f1-99b1-33cf59b01d74-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:38.001736 master-0 kubenswrapper[33013]: I0313 11:03:38.000988 33013 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b68ed803-45e2-42f1-99b1-33cf59b01d74-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 13 11:03:38.773721 master-0 kubenswrapper[33013]: I0313 11:03:38.773660 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" event={"ID":"b68ed803-45e2-42f1-99b1-33cf59b01d74","Type":"ContainerDied","Data":"823dd75fa90067312411e552ed573617320e3f633eba91399bcfb19342dfaab8"} Mar 13 11:03:38.773721 master-0 kubenswrapper[33013]: I0313 11:03:38.773728 33013 scope.go:117] "RemoveContainer" containerID="a53ccb10d38781462661d28f14cee8ad4f8374b8664112cbbcf7c91c9615f04e" Mar 13 11:03:38.775675 master-0 kubenswrapper[33013]: I0313 11:03:38.773887 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-68597ccc5b-xrb8c" Mar 13 11:03:38.800859 master-0 kubenswrapper[33013]: I0313 11:03:38.800779 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-68597ccc5b-xrb8c"] Mar 13 11:03:38.806961 master-0 kubenswrapper[33013]: I0313 11:03:38.806904 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-68597ccc5b-xrb8c"] Mar 13 11:03:40.721462 master-0 kubenswrapper[33013]: I0313 11:03:40.721377 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b68ed803-45e2-42f1-99b1-33cf59b01d74" path="/var/lib/kubelet/pods/b68ed803-45e2-42f1-99b1-33cf59b01d74/volumes" Mar 13 11:03:40.768234 master-0 kubenswrapper[33013]: I0313 11:03:40.768174 33013 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 11:03:40.768473 master-0 kubenswrapper[33013]: I0313 11:03:40.768243 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5ad616f4656fbce37c87f129f788ab06" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 11:03:50.767501 master-0 kubenswrapper[33013]: I0313 11:03:50.767428 33013 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 11:03:50.768117 master-0 kubenswrapper[33013]: I0313 
11:03:50.767520 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5ad616f4656fbce37c87f129f788ab06" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 11:03:50.768117 master-0 kubenswrapper[33013]: I0313 11:03:50.767604 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:03:50.768395 master-0 kubenswrapper[33013]: I0313 11:03:50.768353 33013 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"d15978ad368260f4970900195660ff329f9fae6bd88e7fd9c0e8ad7ebca05134"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 11:03:50.768524 master-0 kubenswrapper[33013]: I0313 11:03:50.768490 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5ad616f4656fbce37c87f129f788ab06" containerName="kube-controller-manager" containerID="cri-o://d15978ad368260f4970900195660ff329f9fae6bd88e7fd9c0e8ad7ebca05134" gracePeriod=30 Mar 13 11:04:05.843204 master-0 kubenswrapper[33013]: I0313 11:04:05.843090 33013 scope.go:117] "RemoveContainer" containerID="9055e315c8a514a2e7caff4002ccd935f6b8f26c1543cb6f8b2224217493efae" Mar 13 11:04:05.865077 master-0 kubenswrapper[33013]: I0313 11:04:05.864997 33013 scope.go:117] "RemoveContainer" containerID="8ef4ca3fd55a1fdc272bbe95b06fd59615f0875eb40d0760256756564104e8c0" Mar 13 11:04:05.880946 master-0 kubenswrapper[33013]: I0313 11:04:05.879798 33013 scope.go:117] "RemoveContainer" 
containerID="c628e765eaabffc23db2c1635eeb15519da1c1cbfb8a52269fa9da1481c956a3" Mar 13 11:04:20.258303 master-0 kubenswrapper[33013]: I0313 11:04:20.250666 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-dp5bz"] Mar 13 11:04:20.258303 master-0 kubenswrapper[33013]: E0313 11:04:20.251039 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31da838d-aec3-43f1-8eb5-69b65aa77cf6" containerName="installer" Mar 13 11:04:20.258303 master-0 kubenswrapper[33013]: I0313 11:04:20.251057 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="31da838d-aec3-43f1-8eb5-69b65aa77cf6" containerName="installer" Mar 13 11:04:20.258303 master-0 kubenswrapper[33013]: E0313 11:04:20.251122 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68ed803-45e2-42f1-99b1-33cf59b01d74" containerName="metrics-server" Mar 13 11:04:20.258303 master-0 kubenswrapper[33013]: I0313 11:04:20.251132 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68ed803-45e2-42f1-99b1-33cf59b01d74" containerName="metrics-server" Mar 13 11:04:20.258303 master-0 kubenswrapper[33013]: I0313 11:04:20.251326 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b68ed803-45e2-42f1-99b1-33cf59b01d74" containerName="metrics-server" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.266222 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="31da838d-aec3-43f1-8eb5-69b65aa77cf6" containerName="installer" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.266923 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75bbf545c6-v5b28"] Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.267898 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.268343 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269085 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zqcf\" (UniqueName: \"kubernetes.io/projected/a4a079d7-e6d5-4622-87db-714e92f42458-kube-api-access-4zqcf\") pod \"sushy-emulator-6dd6777c94-dp5bz\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269128 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-trusted-ca-bundle\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269159 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-oauth-config\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269213 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/a4a079d7-e6d5-4622-87db-714e92f42458-sushy-emulator-config\") pod \"sushy-emulator-6dd6777c94-dp5bz\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " 
pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269236 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf7fd\" (UniqueName: \"kubernetes.io/projected/9970752c-2c89-447e-a248-73504d39e4e6-kube-api-access-qf7fd\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269280 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-service-ca\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269318 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-oauth-serving-cert\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269347 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-serving-cert\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269374 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-console-config\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.271768 master-0 kubenswrapper[33013]: I0313 11:04:20.269410 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/a4a079d7-e6d5-4622-87db-714e92f42458-os-client-config\") pod \"sushy-emulator-6dd6777c94-dp5bz\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.275519 master-0 kubenswrapper[33013]: I0313 11:04:20.274963 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-dp5bz"] Mar 13 11:04:20.276254 master-0 kubenswrapper[33013]: I0313 11:04:20.276212 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 13 11:04:20.276332 master-0 kubenswrapper[33013]: I0313 11:04:20.276256 33013 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Mar 13 11:04:20.276382 master-0 kubenswrapper[33013]: I0313 11:04:20.276211 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Mar 13 11:04:20.276424 master-0 kubenswrapper[33013]: I0313 11:04:20.276259 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Mar 13 11:04:20.305664 master-0 kubenswrapper[33013]: I0313 11:04:20.304855 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75bbf545c6-v5b28"] Mar 13 11:04:20.373715 master-0 kubenswrapper[33013]: I0313 11:04:20.373551 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-service-ca\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.373715 master-0 kubenswrapper[33013]: I0313 11:04:20.373668 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-oauth-serving-cert\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.373715 master-0 kubenswrapper[33013]: I0313 11:04:20.373698 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-serving-cert\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.373715 master-0 kubenswrapper[33013]: I0313 11:04:20.373720 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-console-config\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.373715 master-0 kubenswrapper[33013]: I0313 11:04:20.373738 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/a4a079d7-e6d5-4622-87db-714e92f42458-os-client-config\") pod \"sushy-emulator-6dd6777c94-dp5bz\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.374833 master-0 kubenswrapper[33013]: I0313 11:04:20.373775 33013 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4zqcf\" (UniqueName: \"kubernetes.io/projected/a4a079d7-e6d5-4622-87db-714e92f42458-kube-api-access-4zqcf\") pod \"sushy-emulator-6dd6777c94-dp5bz\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.374833 master-0 kubenswrapper[33013]: I0313 11:04:20.373796 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-trusted-ca-bundle\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.374833 master-0 kubenswrapper[33013]: I0313 11:04:20.373825 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-oauth-config\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.374833 master-0 kubenswrapper[33013]: I0313 11:04:20.374826 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-console-config\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.375007 master-0 kubenswrapper[33013]: I0313 11:04:20.374885 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/a4a079d7-e6d5-4622-87db-714e92f42458-sushy-emulator-config\") pod \"sushy-emulator-6dd6777c94-dp5bz\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.375007 master-0 kubenswrapper[33013]: I0313 
11:04:20.374922 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf7fd\" (UniqueName: \"kubernetes.io/projected/9970752c-2c89-447e-a248-73504d39e4e6-kube-api-access-qf7fd\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.380664 master-0 kubenswrapper[33013]: I0313 11:04:20.376031 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/a4a079d7-e6d5-4622-87db-714e92f42458-sushy-emulator-config\") pod \"sushy-emulator-6dd6777c94-dp5bz\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.380664 master-0 kubenswrapper[33013]: I0313 11:04:20.378569 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-trusted-ca-bundle\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.380664 master-0 kubenswrapper[33013]: I0313 11:04:20.379472 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-oauth-serving-cert\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.380664 master-0 kubenswrapper[33013]: I0313 11:04:20.379534 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-service-ca\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.381772 master-0 
kubenswrapper[33013]: I0313 11:04:20.381716 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-oauth-config\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.387607 master-0 kubenswrapper[33013]: I0313 11:04:20.386155 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/a4a079d7-e6d5-4622-87db-714e92f42458-os-client-config\") pod \"sushy-emulator-6dd6777c94-dp5bz\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.387607 master-0 kubenswrapper[33013]: I0313 11:04:20.387293 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-serving-cert\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.402422 master-0 kubenswrapper[33013]: I0313 11:04:20.402372 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zqcf\" (UniqueName: \"kubernetes.io/projected/a4a079d7-e6d5-4622-87db-714e92f42458-kube-api-access-4zqcf\") pod \"sushy-emulator-6dd6777c94-dp5bz\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:20.404677 master-0 kubenswrapper[33013]: I0313 11:04:20.404529 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf7fd\" (UniqueName: \"kubernetes.io/projected/9970752c-2c89-447e-a248-73504d39e4e6-kube-api-access-qf7fd\") pod \"console-75bbf545c6-v5b28\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " 
pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.632793 master-0 kubenswrapper[33013]: I0313 11:04:20.632728 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:20.659232 master-0 kubenswrapper[33013]: I0313 11:04:20.658857 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:21.074146 master-0 kubenswrapper[33013]: I0313 11:04:21.074081 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75bbf545c6-v5b28"] Mar 13 11:04:21.074701 master-0 kubenswrapper[33013]: W0313 11:04:21.074655 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9970752c_2c89_447e_a248_73504d39e4e6.slice/crio-c9c962429d52f4e09577f51cd3f80bde7c23d67053505d501606b7910e2c038c WatchSource:0}: Error finding container c9c962429d52f4e09577f51cd3f80bde7c23d67053505d501606b7910e2c038c: Status 404 returned error can't find the container with id c9c962429d52f4e09577f51cd3f80bde7c23d67053505d501606b7910e2c038c Mar 13 11:04:21.111243 master-0 kubenswrapper[33013]: I0313 11:04:21.111057 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5ad616f4656fbce37c87f129f788ab06/kube-controller-manager/0.log" Mar 13 11:04:21.111243 master-0 kubenswrapper[33013]: I0313 11:04:21.111149 33013 generic.go:334] "Generic (PLEG): container finished" podID="5ad616f4656fbce37c87f129f788ab06" containerID="d15978ad368260f4970900195660ff329f9fae6bd88e7fd9c0e8ad7ebca05134" exitCode=137 Mar 13 11:04:21.111243 master-0 kubenswrapper[33013]: I0313 11:04:21.111233 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"5ad616f4656fbce37c87f129f788ab06","Type":"ContainerDied","Data":"d15978ad368260f4970900195660ff329f9fae6bd88e7fd9c0e8ad7ebca05134"} Mar 13 11:04:21.117428 master-0 kubenswrapper[33013]: I0313 11:04:21.117353 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75bbf545c6-v5b28" event={"ID":"9970752c-2c89-447e-a248-73504d39e4e6","Type":"ContainerStarted","Data":"c9c962429d52f4e09577f51cd3f80bde7c23d67053505d501606b7910e2c038c"} Mar 13 11:04:21.147771 master-0 kubenswrapper[33013]: I0313 11:04:21.147671 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-dp5bz"] Mar 13 11:04:21.161451 master-0 kubenswrapper[33013]: W0313 11:04:21.161403 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4a079d7_e6d5_4622_87db_714e92f42458.slice/crio-eceef98118832f3c64205b08536b55d2c333db5459741df541ec269fa9b78489 WatchSource:0}: Error finding container eceef98118832f3c64205b08536b55d2c333db5459741df541ec269fa9b78489: Status 404 returned error can't find the container with id eceef98118832f3c64205b08536b55d2c333db5459741df541ec269fa9b78489 Mar 13 11:04:21.164549 master-0 kubenswrapper[33013]: I0313 11:04:21.164519 33013 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 11:04:22.127825 master-0 kubenswrapper[33013]: I0313 11:04:22.127749 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75bbf545c6-v5b28" event={"ID":"9970752c-2c89-447e-a248-73504d39e4e6","Type":"ContainerStarted","Data":"aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3"} Mar 13 11:04:22.130345 master-0 kubenswrapper[33013]: I0313 11:04:22.130134 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" 
event={"ID":"a4a079d7-e6d5-4622-87db-714e92f42458","Type":"ContainerStarted","Data":"eceef98118832f3c64205b08536b55d2c333db5459741df541ec269fa9b78489"} Mar 13 11:04:22.137100 master-0 kubenswrapper[33013]: I0313 11:04:22.137036 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5ad616f4656fbce37c87f129f788ab06/kube-controller-manager/0.log" Mar 13 11:04:22.137315 master-0 kubenswrapper[33013]: I0313 11:04:22.137107 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5ad616f4656fbce37c87f129f788ab06","Type":"ContainerStarted","Data":"bccbfd9d79f7ad44f72bac727d78760efd6bd019df051323f2d3ef225fac618a"} Mar 13 11:04:22.184580 master-0 kubenswrapper[33013]: I0313 11:04:22.184499 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-75bbf545c6-v5b28" podStartSLOduration=2.184482439 podStartE2EDuration="2.184482439s" podCreationTimestamp="2026-03-13 11:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:04:22.149979774 +0000 UTC m=+445.625933113" watchObservedRunningTime="2026-03-13 11:04:22.184482439 +0000 UTC m=+445.660435788" Mar 13 11:04:28.182420 master-0 kubenswrapper[33013]: I0313 11:04:28.182324 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" event={"ID":"a4a079d7-e6d5-4622-87db-714e92f42458","Type":"ContainerStarted","Data":"294b64d4c59443bfa39ef5519bc2f20e9fb2cf31cf6fdcb7c2b886cb1577a014"} Mar 13 11:04:28.213621 master-0 kubenswrapper[33013]: I0313 11:04:28.213455 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" podStartSLOduration=1.586032653 podStartE2EDuration="8.213429696s" 
podCreationTimestamp="2026-03-13 11:04:20 +0000 UTC" firstStartedPulling="2026-03-13 11:04:21.164482634 +0000 UTC m=+444.640435983" lastFinishedPulling="2026-03-13 11:04:27.791879677 +0000 UTC m=+451.267833026" observedRunningTime="2026-03-13 11:04:28.207624042 +0000 UTC m=+451.683577391" watchObservedRunningTime="2026-03-13 11:04:28.213429696 +0000 UTC m=+451.689383035" Mar 13 11:04:30.633443 master-0 kubenswrapper[33013]: I0313 11:04:30.633323 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:30.633443 master-0 kubenswrapper[33013]: I0313 11:04:30.633439 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:30.639652 master-0 kubenswrapper[33013]: I0313 11:04:30.639556 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:30.660515 master-0 kubenswrapper[33013]: I0313 11:04:30.660413 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:30.660515 master-0 kubenswrapper[33013]: I0313 11:04:30.660489 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:30.671570 master-0 kubenswrapper[33013]: I0313 11:04:30.671469 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:30.767430 master-0 kubenswrapper[33013]: I0313 11:04:30.767315 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:04:30.767430 master-0 kubenswrapper[33013]: I0313 11:04:30.767398 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:04:30.772732 master-0 kubenswrapper[33013]: I0313 11:04:30.772697 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:04:31.213489 master-0 kubenswrapper[33013]: I0313 11:04:31.213228 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:04:31.214533 master-0 kubenswrapper[33013]: I0313 11:04:31.214479 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:04:31.214966 master-0 kubenswrapper[33013]: I0313 11:04:31.214911 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 11:04:38.388610 master-0 kubenswrapper[33013]: I0313 11:04:38.388137 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76bbbbbcd4-rgrm6"] Mar 13 11:04:55.428521 master-0 kubenswrapper[33013]: I0313 11:04:55.428441 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-844c676c59-j9g8j"] Mar 13 11:04:55.429887 master-0 kubenswrapper[33013]: I0313 11:04:55.429856 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" Mar 13 11:04:55.441088 master-0 kubenswrapper[33013]: I0313 11:04:55.440994 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-844c676c59-j9g8j"] Mar 13 11:04:55.492522 master-0 kubenswrapper[33013]: I0313 11:04:55.492438 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkfjg\" (UniqueName: \"kubernetes.io/projected/6208f879-cc35-4aa3-b44e-1ffbda4eb0bf-kube-api-access-gkfjg\") pod \"nova-console-poller-844c676c59-j9g8j\" (UID: \"6208f879-cc35-4aa3-b44e-1ffbda4eb0bf\") " pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" Mar 13 11:04:55.492787 master-0 kubenswrapper[33013]: I0313 11:04:55.492548 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/6208f879-cc35-4aa3-b44e-1ffbda4eb0bf-os-client-config\") pod \"nova-console-poller-844c676c59-j9g8j\" (UID: \"6208f879-cc35-4aa3-b44e-1ffbda4eb0bf\") " pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" Mar 13 11:04:55.595040 master-0 kubenswrapper[33013]: I0313 11:04:55.594960 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/6208f879-cc35-4aa3-b44e-1ffbda4eb0bf-os-client-config\") pod \"nova-console-poller-844c676c59-j9g8j\" (UID: \"6208f879-cc35-4aa3-b44e-1ffbda4eb0bf\") " pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" Mar 13 11:04:55.595290 master-0 kubenswrapper[33013]: I0313 11:04:55.595089 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkfjg\" (UniqueName: \"kubernetes.io/projected/6208f879-cc35-4aa3-b44e-1ffbda4eb0bf-kube-api-access-gkfjg\") pod \"nova-console-poller-844c676c59-j9g8j\" (UID: \"6208f879-cc35-4aa3-b44e-1ffbda4eb0bf\") " 
pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" Mar 13 11:04:55.599295 master-0 kubenswrapper[33013]: I0313 11:04:55.599181 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/6208f879-cc35-4aa3-b44e-1ffbda4eb0bf-os-client-config\") pod \"nova-console-poller-844c676c59-j9g8j\" (UID: \"6208f879-cc35-4aa3-b44e-1ffbda4eb0bf\") " pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" Mar 13 11:04:55.614761 master-0 kubenswrapper[33013]: I0313 11:04:55.614713 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkfjg\" (UniqueName: \"kubernetes.io/projected/6208f879-cc35-4aa3-b44e-1ffbda4eb0bf-kube-api-access-gkfjg\") pod \"nova-console-poller-844c676c59-j9g8j\" (UID: \"6208f879-cc35-4aa3-b44e-1ffbda4eb0bf\") " pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" Mar 13 11:04:55.758146 master-0 kubenswrapper[33013]: I0313 11:04:55.758022 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" Mar 13 11:04:56.149890 master-0 kubenswrapper[33013]: I0313 11:04:56.149786 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-844c676c59-j9g8j"] Mar 13 11:04:56.151698 master-0 kubenswrapper[33013]: W0313 11:04:56.151646 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6208f879_cc35_4aa3_b44e_1ffbda4eb0bf.slice/crio-e7587970bf6c6bcc40f6745e2989d162cd70d2d41061a2ddfae627913a4eb51e WatchSource:0}: Error finding container e7587970bf6c6bcc40f6745e2989d162cd70d2d41061a2ddfae627913a4eb51e: Status 404 returned error can't find the container with id e7587970bf6c6bcc40f6745e2989d162cd70d2d41061a2ddfae627913a4eb51e Mar 13 11:04:56.538342 master-0 kubenswrapper[33013]: I0313 11:04:56.538171 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" event={"ID":"6208f879-cc35-4aa3-b44e-1ffbda4eb0bf","Type":"ContainerStarted","Data":"e7587970bf6c6bcc40f6745e2989d162cd70d2d41061a2ddfae627913a4eb51e"} Mar 13 11:05:02.592438 master-0 kubenswrapper[33013]: I0313 11:05:02.592367 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" event={"ID":"6208f879-cc35-4aa3-b44e-1ffbda4eb0bf","Type":"ContainerStarted","Data":"caa13179405454388c5bb97f8c13588dd8e62b5a2c73e4f99db7c048be4896c3"} Mar 13 11:05:03.437129 master-0 kubenswrapper[33013]: I0313 11:05:03.436969 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76bbbbbcd4-rgrm6" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" containerID="cri-o://90fefd75f56592057f0c06f82f4c8e3d37a50635dd811329b11289fe0259e993" gracePeriod=15 Mar 13 11:05:03.616168 master-0 kubenswrapper[33013]: I0313 11:05:03.616105 33013 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-console_console-76bbbbbcd4-rgrm6_496bd468-60d6-40a1-a2ba-682e3f95a36a/console/0.log" Mar 13 11:05:03.616736 master-0 kubenswrapper[33013]: I0313 11:05:03.616193 33013 generic.go:334] "Generic (PLEG): container finished" podID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerID="90fefd75f56592057f0c06f82f4c8e3d37a50635dd811329b11289fe0259e993" exitCode=2 Mar 13 11:05:03.616736 master-0 kubenswrapper[33013]: I0313 11:05:03.616314 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76bbbbbcd4-rgrm6" event={"ID":"496bd468-60d6-40a1-a2ba-682e3f95a36a","Type":"ContainerDied","Data":"90fefd75f56592057f0c06f82f4c8e3d37a50635dd811329b11289fe0259e993"} Mar 13 11:05:03.626577 master-0 kubenswrapper[33013]: I0313 11:05:03.626489 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" event={"ID":"6208f879-cc35-4aa3-b44e-1ffbda4eb0bf","Type":"ContainerStarted","Data":"41f9ec0247eec4aa83ed0730520a725658e0827f091d4b3351cfccd7eab689c1"} Mar 13 11:05:03.719357 master-0 kubenswrapper[33013]: I0313 11:05:03.719234 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-844c676c59-j9g8j" podStartSLOduration=1.816125131 podStartE2EDuration="8.719200782s" podCreationTimestamp="2026-03-13 11:04:55 +0000 UTC" firstStartedPulling="2026-03-13 11:04:56.153961695 +0000 UTC m=+479.629915044" lastFinishedPulling="2026-03-13 11:05:03.057037346 +0000 UTC m=+486.532990695" observedRunningTime="2026-03-13 11:05:03.71807428 +0000 UTC m=+487.194027629" watchObservedRunningTime="2026-03-13 11:05:03.719200782 +0000 UTC m=+487.195154151" Mar 13 11:05:03.937919 master-0 kubenswrapper[33013]: I0313 11:05:03.937866 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76bbbbbcd4-rgrm6_496bd468-60d6-40a1-a2ba-682e3f95a36a/console/0.log" Mar 13 11:05:03.938213 master-0 
kubenswrapper[33013]: I0313 11:05:03.937952 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 11:05:04.040225 master-0 kubenswrapper[33013]: I0313 11:05:04.039996 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-oauth-config\") pod \"496bd468-60d6-40a1-a2ba-682e3f95a36a\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " Mar 13 11:05:04.040225 master-0 kubenswrapper[33013]: I0313 11:05:04.040154 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-oauth-serving-cert\") pod \"496bd468-60d6-40a1-a2ba-682e3f95a36a\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " Mar 13 11:05:04.040225 master-0 kubenswrapper[33013]: I0313 11:05:04.040235 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-trusted-ca-bundle\") pod \"496bd468-60d6-40a1-a2ba-682e3f95a36a\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " Mar 13 11:05:04.040865 master-0 kubenswrapper[33013]: I0313 11:05:04.040318 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks5mg\" (UniqueName: \"kubernetes.io/projected/496bd468-60d6-40a1-a2ba-682e3f95a36a-kube-api-access-ks5mg\") pod \"496bd468-60d6-40a1-a2ba-682e3f95a36a\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " Mar 13 11:05:04.040865 master-0 kubenswrapper[33013]: I0313 11:05:04.040373 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-serving-cert\") pod 
\"496bd468-60d6-40a1-a2ba-682e3f95a36a\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " Mar 13 11:05:04.040865 master-0 kubenswrapper[33013]: I0313 11:05:04.040428 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-service-ca\") pod \"496bd468-60d6-40a1-a2ba-682e3f95a36a\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " Mar 13 11:05:04.040865 master-0 kubenswrapper[33013]: I0313 11:05:04.040464 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-config\") pod \"496bd468-60d6-40a1-a2ba-682e3f95a36a\" (UID: \"496bd468-60d6-40a1-a2ba-682e3f95a36a\") " Mar 13 11:05:04.042441 master-0 kubenswrapper[33013]: I0313 11:05:04.042390 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-config" (OuterVolumeSpecName: "console-config") pod "496bd468-60d6-40a1-a2ba-682e3f95a36a" (UID: "496bd468-60d6-40a1-a2ba-682e3f95a36a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:05:04.042441 master-0 kubenswrapper[33013]: I0313 11:05:04.042409 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-service-ca" (OuterVolumeSpecName: "service-ca") pod "496bd468-60d6-40a1-a2ba-682e3f95a36a" (UID: "496bd468-60d6-40a1-a2ba-682e3f95a36a"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:05:04.042838 master-0 kubenswrapper[33013]: I0313 11:05:04.042806 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "496bd468-60d6-40a1-a2ba-682e3f95a36a" (UID: "496bd468-60d6-40a1-a2ba-682e3f95a36a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:05:04.042939 master-0 kubenswrapper[33013]: I0313 11:05:04.042913 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "496bd468-60d6-40a1-a2ba-682e3f95a36a" (UID: "496bd468-60d6-40a1-a2ba-682e3f95a36a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:05:04.045433 master-0 kubenswrapper[33013]: I0313 11:05:04.045355 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "496bd468-60d6-40a1-a2ba-682e3f95a36a" (UID: "496bd468-60d6-40a1-a2ba-682e3f95a36a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:05:04.046652 master-0 kubenswrapper[33013]: I0313 11:05:04.046612 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496bd468-60d6-40a1-a2ba-682e3f95a36a-kube-api-access-ks5mg" (OuterVolumeSpecName: "kube-api-access-ks5mg") pod "496bd468-60d6-40a1-a2ba-682e3f95a36a" (UID: "496bd468-60d6-40a1-a2ba-682e3f95a36a"). InnerVolumeSpecName "kube-api-access-ks5mg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:05:04.047079 master-0 kubenswrapper[33013]: I0313 11:05:04.047033 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "496bd468-60d6-40a1-a2ba-682e3f95a36a" (UID: "496bd468-60d6-40a1-a2ba-682e3f95a36a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:05:04.142412 master-0 kubenswrapper[33013]: I0313 11:05:04.142319 33013 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 11:05:04.142412 master-0 kubenswrapper[33013]: I0313 11:05:04.142385 33013 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:05:04.142412 master-0 kubenswrapper[33013]: I0313 11:05:04.142399 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks5mg\" (UniqueName: \"kubernetes.io/projected/496bd468-60d6-40a1-a2ba-682e3f95a36a-kube-api-access-ks5mg\") on node \"master-0\" DevicePath \"\"" Mar 13 11:05:04.142412 master-0 kubenswrapper[33013]: I0313 11:05:04.142413 33013 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 11:05:04.142412 master-0 kubenswrapper[33013]: I0313 11:05:04.142429 33013 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 
11:05:04.142412 master-0 kubenswrapper[33013]: I0313 11:05:04.142442 33013 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:05:04.142412 master-0 kubenswrapper[33013]: I0313 11:05:04.142454 33013 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/496bd468-60d6-40a1-a2ba-682e3f95a36a-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:05:04.638290 master-0 kubenswrapper[33013]: I0313 11:05:04.638245 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76bbbbbcd4-rgrm6_496bd468-60d6-40a1-a2ba-682e3f95a36a/console/0.log" Mar 13 11:05:04.638935 master-0 kubenswrapper[33013]: I0313 11:05:04.638386 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76bbbbbcd4-rgrm6" Mar 13 11:05:04.638935 master-0 kubenswrapper[33013]: I0313 11:05:04.638379 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76bbbbbcd4-rgrm6" event={"ID":"496bd468-60d6-40a1-a2ba-682e3f95a36a","Type":"ContainerDied","Data":"04c63e8c774408ba0b4e1db70a6836fbfbf2d6870623d9f8ce48872f54215040"} Mar 13 11:05:04.638935 master-0 kubenswrapper[33013]: I0313 11:05:04.638442 33013 scope.go:117] "RemoveContainer" containerID="90fefd75f56592057f0c06f82f4c8e3d37a50635dd811329b11289fe0259e993" Mar 13 11:05:04.683619 master-0 kubenswrapper[33013]: I0313 11:05:04.683495 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76bbbbbcd4-rgrm6"] Mar 13 11:05:04.688944 master-0 kubenswrapper[33013]: I0313 11:05:04.688870 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76bbbbbcd4-rgrm6"] Mar 13 11:05:04.722607 master-0 kubenswrapper[33013]: I0313 11:05:04.722531 33013 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" path="/var/lib/kubelet/pods/496bd468-60d6-40a1-a2ba-682e3f95a36a/volumes" Mar 13 11:05:27.911863 master-0 kubenswrapper[33013]: I0313 11:05:27.911779 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-54c96687f9-pqxqq"] Mar 13 11:05:27.912765 master-0 kubenswrapper[33013]: E0313 11:05:27.912197 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" Mar 13 11:05:27.912765 master-0 kubenswrapper[33013]: I0313 11:05:27.912216 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" Mar 13 11:05:27.912765 master-0 kubenswrapper[33013]: I0313 11:05:27.912488 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="496bd468-60d6-40a1-a2ba-682e3f95a36a" containerName="console" Mar 13 11:05:27.913484 master-0 kubenswrapper[33013]: I0313 11:05:27.913443 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:27.929380 master-0 kubenswrapper[33013]: I0313 11:05:27.929335 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f2fc0951-4f0a-4633-874b-48bda6ac660f-os-client-config\") pod \"nova-console-recorder-54c96687f9-pqxqq\" (UID: \"f2fc0951-4f0a-4633-874b-48bda6ac660f\") " pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:27.968418 master-0 kubenswrapper[33013]: I0313 11:05:27.968362 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-54c96687f9-pqxqq"] Mar 13 11:05:28.030755 master-0 kubenswrapper[33013]: I0313 11:05:28.030679 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/f2fc0951-4f0a-4633-874b-48bda6ac660f-nova-console-recordings-pv\") pod \"nova-console-recorder-54c96687f9-pqxqq\" (UID: \"f2fc0951-4f0a-4633-874b-48bda6ac660f\") " pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:28.030755 master-0 kubenswrapper[33013]: I0313 11:05:28.030751 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f2fc0951-4f0a-4633-874b-48bda6ac660f-os-client-config\") pod \"nova-console-recorder-54c96687f9-pqxqq\" (UID: \"f2fc0951-4f0a-4633-874b-48bda6ac660f\") " pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:28.031168 master-0 kubenswrapper[33013]: I0313 11:05:28.030892 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8c6\" (UniqueName: \"kubernetes.io/projected/f2fc0951-4f0a-4633-874b-48bda6ac660f-kube-api-access-wn8c6\") pod \"nova-console-recorder-54c96687f9-pqxqq\" (UID: 
\"f2fc0951-4f0a-4633-874b-48bda6ac660f\") " pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:28.035217 master-0 kubenswrapper[33013]: I0313 11:05:28.035169 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f2fc0951-4f0a-4633-874b-48bda6ac660f-os-client-config\") pod \"nova-console-recorder-54c96687f9-pqxqq\" (UID: \"f2fc0951-4f0a-4633-874b-48bda6ac660f\") " pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:28.132722 master-0 kubenswrapper[33013]: I0313 11:05:28.132648 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8c6\" (UniqueName: \"kubernetes.io/projected/f2fc0951-4f0a-4633-874b-48bda6ac660f-kube-api-access-wn8c6\") pod \"nova-console-recorder-54c96687f9-pqxqq\" (UID: \"f2fc0951-4f0a-4633-874b-48bda6ac660f\") " pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:28.133099 master-0 kubenswrapper[33013]: I0313 11:05:28.133080 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/f2fc0951-4f0a-4633-874b-48bda6ac660f-nova-console-recordings-pv\") pod \"nova-console-recorder-54c96687f9-pqxqq\" (UID: \"f2fc0951-4f0a-4633-874b-48bda6ac660f\") " pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:28.150556 master-0 kubenswrapper[33013]: I0313 11:05:28.150483 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8c6\" (UniqueName: \"kubernetes.io/projected/f2fc0951-4f0a-4633-874b-48bda6ac660f-kube-api-access-wn8c6\") pod \"nova-console-recorder-54c96687f9-pqxqq\" (UID: \"f2fc0951-4f0a-4633-874b-48bda6ac660f\") " pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:28.787136 master-0 kubenswrapper[33013]: I0313 11:05:28.787072 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/f2fc0951-4f0a-4633-874b-48bda6ac660f-nova-console-recordings-pv\") pod \"nova-console-recorder-54c96687f9-pqxqq\" (UID: \"f2fc0951-4f0a-4633-874b-48bda6ac660f\") " pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:28.831237 master-0 kubenswrapper[33013]: I0313 11:05:28.831130 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" Mar 13 11:05:29.220877 master-0 kubenswrapper[33013]: I0313 11:05:29.220793 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-54c96687f9-pqxqq"] Mar 13 11:05:29.225533 master-0 kubenswrapper[33013]: W0313 11:05:29.225457 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2fc0951_4f0a_4633_874b_48bda6ac660f.slice/crio-f563370c6f47c476d4bf8d2dcfffe0716d0e5f2549c512b2c6d22f38c1ef3a14 WatchSource:0}: Error finding container f563370c6f47c476d4bf8d2dcfffe0716d0e5f2549c512b2c6d22f38c1ef3a14: Status 404 returned error can't find the container with id f563370c6f47c476d4bf8d2dcfffe0716d0e5f2549c512b2c6d22f38c1ef3a14 Mar 13 11:05:29.857723 master-0 kubenswrapper[33013]: I0313 11:05:29.857654 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" event={"ID":"f2fc0951-4f0a-4633-874b-48bda6ac660f","Type":"ContainerStarted","Data":"f563370c6f47c476d4bf8d2dcfffe0716d0e5f2549c512b2c6d22f38c1ef3a14"} Mar 13 11:05:38.918703 master-0 kubenswrapper[33013]: I0313 11:05:38.918632 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" event={"ID":"f2fc0951-4f0a-4633-874b-48bda6ac660f","Type":"ContainerStarted","Data":"162a906ca1dcbb3da12c08d364d1cc892c7c82d1cf62ab42c190184569110127"} Mar 13 11:05:38.918703 master-0 kubenswrapper[33013]: I0313 
11:05:38.918701 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" event={"ID":"f2fc0951-4f0a-4633-874b-48bda6ac660f","Type":"ContainerStarted","Data":"40a7b53a436605a2f009ae6cf9a9048bd3c4b1777ced1c64fa1041879ba94e52"} Mar 13 11:05:38.957262 master-0 kubenswrapper[33013]: I0313 11:05:38.957137 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-54c96687f9-pqxqq" podStartSLOduration=2.760356741 podStartE2EDuration="11.957116759s" podCreationTimestamp="2026-03-13 11:05:27 +0000 UTC" firstStartedPulling="2026-03-13 11:05:29.229215687 +0000 UTC m=+512.705169036" lastFinishedPulling="2026-03-13 11:05:38.425975705 +0000 UTC m=+521.901929054" observedRunningTime="2026-03-13 11:05:38.946830842 +0000 UTC m=+522.422784191" watchObservedRunningTime="2026-03-13 11:05:38.957116759 +0000 UTC m=+522.433070108" Mar 13 11:06:09.528078 master-0 kubenswrapper[33013]: I0313 11:06:09.528003 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh"] Mar 13 11:06:09.530997 master-0 kubenswrapper[33013]: I0313 11:06:09.530953 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:09.532966 master-0 kubenswrapper[33013]: I0313 11:06:09.532900 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vf5mq" Mar 13 11:06:09.542836 master-0 kubenswrapper[33013]: I0313 11:06:09.542783 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh"] Mar 13 11:06:09.677693 master-0 kubenswrapper[33013]: I0313 11:06:09.677577 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7phrf\" (UniqueName: \"kubernetes.io/projected/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-kube-api-access-7phrf\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:09.678052 master-0 kubenswrapper[33013]: I0313 11:06:09.677875 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:09.678052 master-0 kubenswrapper[33013]: I0313 11:06:09.677943 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:09.779883 master-0 kubenswrapper[33013]: I0313 11:06:09.779626 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:09.779883 master-0 kubenswrapper[33013]: I0313 11:06:09.779813 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:09.780231 master-0 kubenswrapper[33013]: I0313 11:06:09.780157 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:09.780804 master-0 kubenswrapper[33013]: I0313 11:06:09.780480 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7phrf\" (UniqueName: \"kubernetes.io/projected/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-kube-api-access-7phrf\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 
11:06:09.780804 master-0 kubenswrapper[33013]: I0313 11:06:09.780571 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:09.799515 master-0 kubenswrapper[33013]: I0313 11:06:09.799081 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7phrf\" (UniqueName: \"kubernetes.io/projected/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-kube-api-access-7phrf\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:09.848687 master-0 kubenswrapper[33013]: I0313 11:06:09.848486 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:10.274532 master-0 kubenswrapper[33013]: I0313 11:06:10.274024 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh"] Mar 13 11:06:11.155693 master-0 kubenswrapper[33013]: I0313 11:06:11.154829 33013 generic.go:334] "Generic (PLEG): container finished" podID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerID="6df2c89a02df27beba728789f01ccf6c0981643873fd69e4a2b47499ddeaf499" exitCode=0 Mar 13 11:06:11.155693 master-0 kubenswrapper[33013]: I0313 11:06:11.154918 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" event={"ID":"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d","Type":"ContainerDied","Data":"6df2c89a02df27beba728789f01ccf6c0981643873fd69e4a2b47499ddeaf499"} Mar 13 11:06:11.155693 master-0 kubenswrapper[33013]: I0313 11:06:11.155690 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" event={"ID":"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d","Type":"ContainerStarted","Data":"dcf05879aab717380b32091ce01e406df74d7973f6f8c2135ea3d5ce577bb820"} Mar 13 11:06:13.181719 master-0 kubenswrapper[33013]: I0313 11:06:13.181639 33013 generic.go:334] "Generic (PLEG): container finished" podID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerID="b7906b33f861056fc690c834274141d4d7fd192cd267fbdc22894e4519e7b591" exitCode=0 Mar 13 11:06:13.181719 master-0 kubenswrapper[33013]: I0313 11:06:13.181718 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" 
event={"ID":"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d","Type":"ContainerDied","Data":"b7906b33f861056fc690c834274141d4d7fd192cd267fbdc22894e4519e7b591"} Mar 13 11:06:14.193106 master-0 kubenswrapper[33013]: I0313 11:06:14.193016 33013 generic.go:334] "Generic (PLEG): container finished" podID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerID="05084d6700ede20c4cc9bf28634c0500e3f5f508499856fb95d279a1231e147c" exitCode=0 Mar 13 11:06:14.193106 master-0 kubenswrapper[33013]: I0313 11:06:14.193081 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" event={"ID":"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d","Type":"ContainerDied","Data":"05084d6700ede20c4cc9bf28634c0500e3f5f508499856fb95d279a1231e147c"} Mar 13 11:06:15.485312 master-0 kubenswrapper[33013]: I0313 11:06:15.485214 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:15.672250 master-0 kubenswrapper[33013]: I0313 11:06:15.672180 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7phrf\" (UniqueName: \"kubernetes.io/projected/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-kube-api-access-7phrf\") pod \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " Mar 13 11:06:15.672489 master-0 kubenswrapper[33013]: I0313 11:06:15.672357 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-bundle\") pod \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " Mar 13 11:06:15.672489 master-0 kubenswrapper[33013]: I0313 11:06:15.672435 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-util\") pod \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\" (UID: \"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d\") " Mar 13 11:06:15.673544 master-0 kubenswrapper[33013]: I0313 11:06:15.673452 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-bundle" (OuterVolumeSpecName: "bundle") pod "cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" (UID: "cacb5c33-9ed4-46c4-bb5f-377bfc66f59d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:06:15.675445 master-0 kubenswrapper[33013]: I0313 11:06:15.675326 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-kube-api-access-7phrf" (OuterVolumeSpecName: "kube-api-access-7phrf") pod "cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" (UID: "cacb5c33-9ed4-46c4-bb5f-377bfc66f59d"). InnerVolumeSpecName "kube-api-access-7phrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:06:15.688680 master-0 kubenswrapper[33013]: I0313 11:06:15.688595 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-util" (OuterVolumeSpecName: "util") pod "cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" (UID: "cacb5c33-9ed4-46c4-bb5f-377bfc66f59d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:06:15.774580 master-0 kubenswrapper[33013]: I0313 11:06:15.774406 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7phrf\" (UniqueName: \"kubernetes.io/projected/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-kube-api-access-7phrf\") on node \"master-0\" DevicePath \"\"" Mar 13 11:06:15.774580 master-0 kubenswrapper[33013]: I0313 11:06:15.774474 33013 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:06:15.774580 master-0 kubenswrapper[33013]: I0313 11:06:15.774487 33013 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cacb5c33-9ed4-46c4-bb5f-377bfc66f59d-util\") on node \"master-0\" DevicePath \"\"" Mar 13 11:06:16.210986 master-0 kubenswrapper[33013]: I0313 11:06:16.210922 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" event={"ID":"cacb5c33-9ed4-46c4-bb5f-377bfc66f59d","Type":"ContainerDied","Data":"dcf05879aab717380b32091ce01e406df74d7973f6f8c2135ea3d5ce577bb820"} Mar 13 11:06:16.210986 master-0 kubenswrapper[33013]: I0313 11:06:16.210967 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf05879aab717380b32091ce01e406df74d7973f6f8c2135ea3d5ce577bb820" Mar 13 11:06:16.210986 master-0 kubenswrapper[33013]: I0313 11:06:16.210971 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d425hnh" Mar 13 11:06:24.892995 master-0 kubenswrapper[33013]: I0313 11:06:24.892921 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7bf95d86d4-8x5w9"] Mar 13 11:06:24.893747 master-0 kubenswrapper[33013]: E0313 11:06:24.893309 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerName="extract" Mar 13 11:06:24.893747 master-0 kubenswrapper[33013]: I0313 11:06:24.893328 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerName="extract" Mar 13 11:06:24.893747 master-0 kubenswrapper[33013]: E0313 11:06:24.893383 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerName="pull" Mar 13 11:06:24.893747 master-0 kubenswrapper[33013]: I0313 11:06:24.893397 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerName="pull" Mar 13 11:06:24.893747 master-0 kubenswrapper[33013]: E0313 11:06:24.893423 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerName="util" Mar 13 11:06:24.893747 master-0 kubenswrapper[33013]: I0313 11:06:24.893432 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerName="util" Mar 13 11:06:24.893747 master-0 kubenswrapper[33013]: I0313 11:06:24.893644 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacb5c33-9ed4-46c4-bb5f-377bfc66f59d" containerName="extract" Mar 13 11:06:24.894294 master-0 kubenswrapper[33013]: I0313 11:06:24.894261 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:24.896137 master-0 kubenswrapper[33013]: I0313 11:06:24.896107 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 13 11:06:24.896244 master-0 kubenswrapper[33013]: I0313 11:06:24.896116 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 13 11:06:24.896553 master-0 kubenswrapper[33013]: I0313 11:06:24.896529 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 13 11:06:24.896921 master-0 kubenswrapper[33013]: I0313 11:06:24.896899 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 13 11:06:24.897934 master-0 kubenswrapper[33013]: I0313 11:06:24.897917 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 13 11:06:24.912056 master-0 kubenswrapper[33013]: I0313 11:06:24.912003 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b7fe3223-9556-404e-9a74-250e7d186f5c-socket-dir\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:24.912238 master-0 kubenswrapper[33013]: I0313 11:06:24.912087 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7fe3223-9556-404e-9a74-250e7d186f5c-apiservice-cert\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:24.912238 master-0 kubenswrapper[33013]: I0313 11:06:24.912108 33013 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7fe3223-9556-404e-9a74-250e7d186f5c-webhook-cert\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:24.912238 master-0 kubenswrapper[33013]: I0313 11:06:24.912128 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7fe3223-9556-404e-9a74-250e7d186f5c-metrics-cert\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:24.912238 master-0 kubenswrapper[33013]: I0313 11:06:24.912181 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pck5\" (UniqueName: \"kubernetes.io/projected/b7fe3223-9556-404e-9a74-250e7d186f5c-kube-api-access-4pck5\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:24.937762 master-0 kubenswrapper[33013]: I0313 11:06:24.937693 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7bf95d86d4-8x5w9"] Mar 13 11:06:25.014022 master-0 kubenswrapper[33013]: I0313 11:06:25.013926 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7fe3223-9556-404e-9a74-250e7d186f5c-apiservice-cert\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.014022 master-0 kubenswrapper[33013]: I0313 11:06:25.013999 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/b7fe3223-9556-404e-9a74-250e7d186f5c-webhook-cert\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.014022 master-0 kubenswrapper[33013]: I0313 11:06:25.014032 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7fe3223-9556-404e-9a74-250e7d186f5c-metrics-cert\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.014422 master-0 kubenswrapper[33013]: I0313 11:06:25.014059 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pck5\" (UniqueName: \"kubernetes.io/projected/b7fe3223-9556-404e-9a74-250e7d186f5c-kube-api-access-4pck5\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.014422 master-0 kubenswrapper[33013]: I0313 11:06:25.014178 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b7fe3223-9556-404e-9a74-250e7d186f5c-socket-dir\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.014792 master-0 kubenswrapper[33013]: I0313 11:06:25.014759 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b7fe3223-9556-404e-9a74-250e7d186f5c-socket-dir\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.017981 master-0 kubenswrapper[33013]: I0313 11:06:25.017955 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7fe3223-9556-404e-9a74-250e7d186f5c-apiservice-cert\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.019107 master-0 kubenswrapper[33013]: I0313 11:06:25.019083 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7fe3223-9556-404e-9a74-250e7d186f5c-webhook-cert\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.019534 master-0 kubenswrapper[33013]: I0313 11:06:25.019514 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7fe3223-9556-404e-9a74-250e7d186f5c-metrics-cert\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.069321 master-0 kubenswrapper[33013]: I0313 11:06:25.069263 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pck5\" (UniqueName: \"kubernetes.io/projected/b7fe3223-9556-404e-9a74-250e7d186f5c-kube-api-access-4pck5\") pod \"lvms-operator-7bf95d86d4-8x5w9\" (UID: \"b7fe3223-9556-404e-9a74-250e7d186f5c\") " pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.209465 master-0 kubenswrapper[33013]: I0313 11:06:25.209329 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:25.674084 master-0 kubenswrapper[33013]: I0313 11:06:25.674024 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7bf95d86d4-8x5w9"] Mar 13 11:06:25.678169 master-0 kubenswrapper[33013]: W0313 11:06:25.678118 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7fe3223_9556_404e_9a74_250e7d186f5c.slice/crio-30f050dd4406abf1ab9fe156171fd3f956a358f8703b5d65db3a6640dc499540 WatchSource:0}: Error finding container 30f050dd4406abf1ab9fe156171fd3f956a358f8703b5d65db3a6640dc499540: Status 404 returned error can't find the container with id 30f050dd4406abf1ab9fe156171fd3f956a358f8703b5d65db3a6640dc499540 Mar 13 11:06:26.303695 master-0 kubenswrapper[33013]: I0313 11:06:26.303629 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" event={"ID":"b7fe3223-9556-404e-9a74-250e7d186f5c","Type":"ContainerStarted","Data":"30f050dd4406abf1ab9fe156171fd3f956a358f8703b5d65db3a6640dc499540"} Mar 13 11:06:34.384036 master-0 kubenswrapper[33013]: I0313 11:06:34.383942 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" event={"ID":"b7fe3223-9556-404e-9a74-250e7d186f5c","Type":"ContainerStarted","Data":"173af2ae747de64d0874bdb47395db3a586005fb536ea418b7bcc239ed504485"} Mar 13 11:06:34.384714 master-0 kubenswrapper[33013]: I0313 11:06:34.384402 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:34.387930 master-0 kubenswrapper[33013]: I0313 11:06:34.387891 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" Mar 13 11:06:34.451057 master-0 kubenswrapper[33013]: I0313 11:06:34.450956 33013 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7bf95d86d4-8x5w9" podStartSLOduration=2.852206376 podStartE2EDuration="10.45092937s" podCreationTimestamp="2026-03-13 11:06:24 +0000 UTC" firstStartedPulling="2026-03-13 11:06:25.684126487 +0000 UTC m=+569.160079836" lastFinishedPulling="2026-03-13 11:06:33.282849441 +0000 UTC m=+576.758802830" observedRunningTime="2026-03-13 11:06:34.446438725 +0000 UTC m=+577.922392084" watchObservedRunningTime="2026-03-13 11:06:34.45092937 +0000 UTC m=+577.926882739" Mar 13 11:06:39.098998 master-0 kubenswrapper[33013]: I0313 11:06:39.098915 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22"] Mar 13 11:06:39.101461 master-0 kubenswrapper[33013]: I0313 11:06:39.101364 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.104084 master-0 kubenswrapper[33013]: I0313 11:06:39.104040 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vf5mq" Mar 13 11:06:39.128793 master-0 kubenswrapper[33013]: I0313 11:06:39.128731 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22"] Mar 13 11:06:39.269069 master-0 kubenswrapper[33013]: I0313 11:06:39.268991 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.269069 master-0 kubenswrapper[33013]: I0313 
11:06:39.269073 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9fnm\" (UniqueName: \"kubernetes.io/projected/c1ea1709-838c-4e89-899a-c5150c143ffd-kube-api-access-m9fnm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.269355 master-0 kubenswrapper[33013]: I0313 11:06:39.269185 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.370463 master-0 kubenswrapper[33013]: I0313 11:06:39.370251 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.370463 master-0 kubenswrapper[33013]: I0313 11:06:39.370394 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9fnm\" (UniqueName: \"kubernetes.io/projected/c1ea1709-838c-4e89-899a-c5150c143ffd-kube-api-access-m9fnm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.370773 master-0 kubenswrapper[33013]: I0313 11:06:39.370518 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.371017 master-0 kubenswrapper[33013]: I0313 11:06:39.370970 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.371017 master-0 kubenswrapper[33013]: I0313 11:06:39.370991 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.387717 master-0 kubenswrapper[33013]: I0313 11:06:39.387648 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9fnm\" (UniqueName: \"kubernetes.io/projected/c1ea1709-838c-4e89-899a-c5150c143ffd-kube-api-access-m9fnm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.415695 master-0 kubenswrapper[33013]: I0313 11:06:39.415646 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:39.842313 master-0 kubenswrapper[33013]: I0313 11:06:39.842242 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22"] Mar 13 11:06:40.428605 master-0 kubenswrapper[33013]: I0313 11:06:40.428508 33013 generic.go:334] "Generic (PLEG): container finished" podID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerID="37a926098a7e673eb99ff971bffc242e1540c0c275941f01bafdb9dc765eb16d" exitCode=0 Mar 13 11:06:40.429195 master-0 kubenswrapper[33013]: I0313 11:06:40.428631 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" event={"ID":"c1ea1709-838c-4e89-899a-c5150c143ffd","Type":"ContainerDied","Data":"37a926098a7e673eb99ff971bffc242e1540c0c275941f01bafdb9dc765eb16d"} Mar 13 11:06:40.429195 master-0 kubenswrapper[33013]: I0313 11:06:40.428695 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" event={"ID":"c1ea1709-838c-4e89-899a-c5150c143ffd","Type":"ContainerStarted","Data":"335ae022fdf6ea1c7f4628462be0138f4be26a8bc007c2dee343b32d96500c2f"} Mar 13 11:06:40.548337 master-0 kubenswrapper[33013]: I0313 11:06:40.548248 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t"] Mar 13 11:06:40.549892 master-0 kubenswrapper[33013]: I0313 11:06:40.549855 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.563011 master-0 kubenswrapper[33013]: I0313 11:06:40.562939 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t"] Mar 13 11:06:40.701329 master-0 kubenswrapper[33013]: I0313 11:06:40.701184 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.701329 master-0 kubenswrapper[33013]: I0313 11:06:40.701248 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.701329 master-0 kubenswrapper[33013]: I0313 11:06:40.701285 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbvd\" (UniqueName: \"kubernetes.io/projected/9280a97a-fd2e-4875-9aa1-4fe70c210d31-kube-api-access-fdbvd\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.804619 master-0 kubenswrapper[33013]: I0313 11:06:40.803497 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.804619 master-0 kubenswrapper[33013]: I0313 11:06:40.803566 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.804619 master-0 kubenswrapper[33013]: I0313 11:06:40.803641 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbvd\" (UniqueName: \"kubernetes.io/projected/9280a97a-fd2e-4875-9aa1-4fe70c210d31-kube-api-access-fdbvd\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.804619 master-0 kubenswrapper[33013]: I0313 11:06:40.804333 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.804938 master-0 kubenswrapper[33013]: I0313 11:06:40.804763 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-bundle\") pod 
\"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.819560 master-0 kubenswrapper[33013]: I0313 11:06:40.819520 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbvd\" (UniqueName: \"kubernetes.io/projected/9280a97a-fd2e-4875-9aa1-4fe70c210d31-kube-api-access-fdbvd\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:40.865961 master-0 kubenswrapper[33013]: I0313 11:06:40.865840 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:41.270929 master-0 kubenswrapper[33013]: I0313 11:06:41.270800 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t"] Mar 13 11:06:41.275763 master-0 kubenswrapper[33013]: W0313 11:06:41.275692 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9280a97a_fd2e_4875_9aa1_4fe70c210d31.slice/crio-0e880afbca3eb560dced6c692576dcbcc888a79062d35fba8f1b3f3b4773408b WatchSource:0}: Error finding container 0e880afbca3eb560dced6c692576dcbcc888a79062d35fba8f1b3f3b4773408b: Status 404 returned error can't find the container with id 0e880afbca3eb560dced6c692576dcbcc888a79062d35fba8f1b3f3b4773408b Mar 13 11:06:41.355113 master-0 kubenswrapper[33013]: I0313 11:06:41.355069 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2"] Mar 13 11:06:41.359438 master-0 kubenswrapper[33013]: 
I0313 11:06:41.359403 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.369808 master-0 kubenswrapper[33013]: I0313 11:06:41.369749 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2"] Mar 13 11:06:41.414788 master-0 kubenswrapper[33013]: I0313 11:06:41.414712 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.414904 master-0 kubenswrapper[33013]: I0313 11:06:41.414829 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q68l2\" (UniqueName: \"kubernetes.io/projected/7bd15c84-e8e2-49d1-b85a-08863914f3f7-kube-api-access-q68l2\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.414904 master-0 kubenswrapper[33013]: I0313 11:06:41.414857 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.437097 master-0 kubenswrapper[33013]: I0313 11:06:41.436966 33013 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" event={"ID":"9280a97a-fd2e-4875-9aa1-4fe70c210d31","Type":"ContainerStarted","Data":"265f2e40bf3c5bc201143b92a522e1f80de4fb36e9a86806f43c1081d8c5c3a5"} Mar 13 11:06:41.437879 master-0 kubenswrapper[33013]: I0313 11:06:41.437816 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" event={"ID":"9280a97a-fd2e-4875-9aa1-4fe70c210d31","Type":"ContainerStarted","Data":"0e880afbca3eb560dced6c692576dcbcc888a79062d35fba8f1b3f3b4773408b"} Mar 13 11:06:41.516195 master-0 kubenswrapper[33013]: I0313 11:06:41.516149 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.516303 master-0 kubenswrapper[33013]: I0313 11:06:41.516234 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q68l2\" (UniqueName: \"kubernetes.io/projected/7bd15c84-e8e2-49d1-b85a-08863914f3f7-kube-api-access-q68l2\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.516303 master-0 kubenswrapper[33013]: I0313 11:06:41.516268 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2\" (UID: 
\"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.516804 master-0 kubenswrapper[33013]: I0313 11:06:41.516787 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.517950 master-0 kubenswrapper[33013]: I0313 11:06:41.517914 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.533946 master-0 kubenswrapper[33013]: I0313 11:06:41.533799 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q68l2\" (UniqueName: \"kubernetes.io/projected/7bd15c84-e8e2-49d1-b85a-08863914f3f7-kube-api-access-q68l2\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:41.717820 master-0 kubenswrapper[33013]: I0313 11:06:41.717757 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:42.168133 master-0 kubenswrapper[33013]: I0313 11:06:42.168078 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2"] Mar 13 11:06:42.172529 master-0 kubenswrapper[33013]: W0313 11:06:42.172484 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bd15c84_e8e2_49d1_b85a_08863914f3f7.slice/crio-73aa54f1c3baaf13657bbd11cfb4d55dfa5ae81e6975c157138126b621f99662 WatchSource:0}: Error finding container 73aa54f1c3baaf13657bbd11cfb4d55dfa5ae81e6975c157138126b621f99662: Status 404 returned error can't find the container with id 73aa54f1c3baaf13657bbd11cfb4d55dfa5ae81e6975c157138126b621f99662 Mar 13 11:06:42.449938 master-0 kubenswrapper[33013]: I0313 11:06:42.449754 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" event={"ID":"7bd15c84-e8e2-49d1-b85a-08863914f3f7","Type":"ContainerStarted","Data":"f4d09ab4b0575575d9091cddff7a5a55596b386718a31c12296060bee08ce82f"} Mar 13 11:06:42.449938 master-0 kubenswrapper[33013]: I0313 11:06:42.449838 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" event={"ID":"7bd15c84-e8e2-49d1-b85a-08863914f3f7","Type":"ContainerStarted","Data":"73aa54f1c3baaf13657bbd11cfb4d55dfa5ae81e6975c157138126b621f99662"} Mar 13 11:06:42.452091 master-0 kubenswrapper[33013]: I0313 11:06:42.452044 33013 generic.go:334] "Generic (PLEG): container finished" podID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerID="265f2e40bf3c5bc201143b92a522e1f80de4fb36e9a86806f43c1081d8c5c3a5" exitCode=0 Mar 13 11:06:42.452166 master-0 kubenswrapper[33013]: I0313 11:06:42.452091 33013 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" event={"ID":"9280a97a-fd2e-4875-9aa1-4fe70c210d31","Type":"ContainerDied","Data":"265f2e40bf3c5bc201143b92a522e1f80de4fb36e9a86806f43c1081d8c5c3a5"} Mar 13 11:06:43.469278 master-0 kubenswrapper[33013]: I0313 11:06:43.469197 33013 generic.go:334] "Generic (PLEG): container finished" podID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerID="f4d09ab4b0575575d9091cddff7a5a55596b386718a31c12296060bee08ce82f" exitCode=0 Mar 13 11:06:43.469278 master-0 kubenswrapper[33013]: I0313 11:06:43.469246 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" event={"ID":"7bd15c84-e8e2-49d1-b85a-08863914f3f7","Type":"ContainerDied","Data":"f4d09ab4b0575575d9091cddff7a5a55596b386718a31c12296060bee08ce82f"} Mar 13 11:06:45.496659 master-0 kubenswrapper[33013]: I0313 11:06:45.496537 33013 generic.go:334] "Generic (PLEG): container finished" podID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerID="e8e1111a936ece24c6d3a767ad4ebd8bb16cdb23884eef2b5f3b06da7e7bfc30" exitCode=0 Mar 13 11:06:45.497964 master-0 kubenswrapper[33013]: I0313 11:06:45.496656 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" event={"ID":"9280a97a-fd2e-4875-9aa1-4fe70c210d31","Type":"ContainerDied","Data":"e8e1111a936ece24c6d3a767ad4ebd8bb16cdb23884eef2b5f3b06da7e7bfc30"} Mar 13 11:06:45.501443 master-0 kubenswrapper[33013]: I0313 11:06:45.501397 33013 generic.go:334] "Generic (PLEG): container finished" podID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerID="6f25ca37bfb9dd30b78f343080b32f1d0d95f0236862912e8db1a207f5ad2031" exitCode=0 Mar 13 11:06:45.501622 master-0 kubenswrapper[33013]: I0313 11:06:45.501489 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" event={"ID":"7bd15c84-e8e2-49d1-b85a-08863914f3f7","Type":"ContainerDied","Data":"6f25ca37bfb9dd30b78f343080b32f1d0d95f0236862912e8db1a207f5ad2031"} Mar 13 11:06:45.507229 master-0 kubenswrapper[33013]: I0313 11:06:45.507164 33013 generic.go:334] "Generic (PLEG): container finished" podID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerID="2ca12f7881725dbe972cbee55f0a4553373192bc90042001cd3c01f18e6c7678" exitCode=0 Mar 13 11:06:45.507229 master-0 kubenswrapper[33013]: I0313 11:06:45.507221 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" event={"ID":"c1ea1709-838c-4e89-899a-c5150c143ffd","Type":"ContainerDied","Data":"2ca12f7881725dbe972cbee55f0a4553373192bc90042001cd3c01f18e6c7678"} Mar 13 11:06:46.520309 master-0 kubenswrapper[33013]: I0313 11:06:46.520080 33013 generic.go:334] "Generic (PLEG): container finished" podID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerID="e3eac76a5b8b0705c6621034b960259560c53e3032c9f55df22c30c20b081da5" exitCode=0 Mar 13 11:06:46.521043 master-0 kubenswrapper[33013]: I0313 11:06:46.520213 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" event={"ID":"c1ea1709-838c-4e89-899a-c5150c143ffd","Type":"ContainerDied","Data":"e3eac76a5b8b0705c6621034b960259560c53e3032c9f55df22c30c20b081da5"} Mar 13 11:06:46.525109 master-0 kubenswrapper[33013]: I0313 11:06:46.525064 33013 generic.go:334] "Generic (PLEG): container finished" podID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerID="42a6df1defa4916866e9df73a8e9492b1c9e9388b83e0ffb8a7e6d3bfee215ee" exitCode=0 Mar 13 11:06:46.525244 master-0 kubenswrapper[33013]: I0313 11:06:46.525142 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" event={"ID":"9280a97a-fd2e-4875-9aa1-4fe70c210d31","Type":"ContainerDied","Data":"42a6df1defa4916866e9df73a8e9492b1c9e9388b83e0ffb8a7e6d3bfee215ee"} Mar 13 11:06:46.529224 master-0 kubenswrapper[33013]: I0313 11:06:46.529164 33013 generic.go:334] "Generic (PLEG): container finished" podID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerID="35297c7586b0e6b6e34b8672c6a42162a9f362e941e8fd5975e688ce739c639d" exitCode=0 Mar 13 11:06:46.529499 master-0 kubenswrapper[33013]: I0313 11:06:46.529374 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" event={"ID":"7bd15c84-e8e2-49d1-b85a-08863914f3f7","Type":"ContainerDied","Data":"35297c7586b0e6b6e34b8672c6a42162a9f362e941e8fd5975e688ce739c639d"} Mar 13 11:06:48.046421 master-0 kubenswrapper[33013]: I0313 11:06:48.045864 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" Mar 13 11:06:48.054642 master-0 kubenswrapper[33013]: I0313 11:06:48.052016 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" Mar 13 11:06:48.057053 master-0 kubenswrapper[33013]: I0313 11:06:48.056845 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" Mar 13 11:06:48.153405 master-0 kubenswrapper[33013]: I0313 11:06:48.153262 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbvd\" (UniqueName: \"kubernetes.io/projected/9280a97a-fd2e-4875-9aa1-4fe70c210d31-kube-api-access-fdbvd\") pod \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") " Mar 13 11:06:48.153405 master-0 kubenswrapper[33013]: I0313 11:06:48.153359 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-bundle\") pod \"c1ea1709-838c-4e89-899a-c5150c143ffd\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") " Mar 13 11:06:48.153706 master-0 kubenswrapper[33013]: I0313 11:06:48.153414 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-util\") pod \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " Mar 13 11:06:48.153706 master-0 kubenswrapper[33013]: I0313 11:06:48.153446 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-bundle\") pod \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " Mar 13 11:06:48.153706 master-0 kubenswrapper[33013]: I0313 11:06:48.153469 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q68l2\" (UniqueName: \"kubernetes.io/projected/7bd15c84-e8e2-49d1-b85a-08863914f3f7-kube-api-access-q68l2\") pod \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\" (UID: \"7bd15c84-e8e2-49d1-b85a-08863914f3f7\") " Mar 13 11:06:48.153706 master-0 kubenswrapper[33013]: I0313 
11:06:48.153494 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-bundle\") pod \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") "
Mar 13 11:06:48.153706 master-0 kubenswrapper[33013]: I0313 11:06:48.153533 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-util\") pod \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\" (UID: \"9280a97a-fd2e-4875-9aa1-4fe70c210d31\") "
Mar 13 11:06:48.153706 master-0 kubenswrapper[33013]: I0313 11:06:48.153570 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9fnm\" (UniqueName: \"kubernetes.io/projected/c1ea1709-838c-4e89-899a-c5150c143ffd-kube-api-access-m9fnm\") pod \"c1ea1709-838c-4e89-899a-c5150c143ffd\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") "
Mar 13 11:06:48.153706 master-0 kubenswrapper[33013]: I0313 11:06:48.153636 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-util\") pod \"c1ea1709-838c-4e89-899a-c5150c143ffd\" (UID: \"c1ea1709-838c-4e89-899a-c5150c143ffd\") "
Mar 13 11:06:48.159389 master-0 kubenswrapper[33013]: I0313 11:06:48.158718 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-bundle" (OuterVolumeSpecName: "bundle") pod "7bd15c84-e8e2-49d1-b85a-08863914f3f7" (UID: "7bd15c84-e8e2-49d1-b85a-08863914f3f7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:48.159389 master-0 kubenswrapper[33013]: I0313 11:06:48.158989 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"]
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: E0313 11:06:48.159428 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerName="pull"
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: I0313 11:06:48.159447 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerName="pull"
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: E0313 11:06:48.159465 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerName="util"
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: I0313 11:06:48.159472 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerName="util"
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: E0313 11:06:48.159510 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerName="extract"
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: I0313 11:06:48.159519 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerName="extract"
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: E0313 11:06:48.159531 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerName="pull"
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: I0313 11:06:48.159536 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerName="pull"
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: E0313 11:06:48.159543 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerName="pull"
Mar 13 11:06:48.159550 master-0 kubenswrapper[33013]: I0313 11:06:48.159549 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerName="pull"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: E0313 11:06:48.159566 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerName="util"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: I0313 11:06:48.159572 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerName="util"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: E0313 11:06:48.159598 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerName="util"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: I0313 11:06:48.159605 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerName="util"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: E0313 11:06:48.159618 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerName="extract"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: I0313 11:06:48.159626 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerName="extract"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: E0313 11:06:48.159644 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerName="extract"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: I0313 11:06:48.159652 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerName="extract"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: I0313 11:06:48.159814 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ea1709-838c-4e89-899a-c5150c143ffd" containerName="extract"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: I0313 11:06:48.159828 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="9280a97a-fd2e-4875-9aa1-4fe70c210d31" containerName="extract"
Mar 13 11:06:48.159887 master-0 kubenswrapper[33013]: I0313 11:06:48.159841 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd15c84-e8e2-49d1-b85a-08863914f3f7" containerName="extract"
Mar 13 11:06:48.160968 master-0 kubenswrapper[33013]: I0313 11:06:48.160868 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.161651 master-0 kubenswrapper[33013]: I0313 11:06:48.161519 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-bundle" (OuterVolumeSpecName: "bundle") pod "9280a97a-fd2e-4875-9aa1-4fe70c210d31" (UID: "9280a97a-fd2e-4875-9aa1-4fe70c210d31"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:48.162383 master-0 kubenswrapper[33013]: I0313 11:06:48.162311 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-bundle" (OuterVolumeSpecName: "bundle") pod "c1ea1709-838c-4e89-899a-c5150c143ffd" (UID: "c1ea1709-838c-4e89-899a-c5150c143ffd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:48.165060 master-0 kubenswrapper[33013]: I0313 11:06:48.164885 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd15c84-e8e2-49d1-b85a-08863914f3f7-kube-api-access-q68l2" (OuterVolumeSpecName: "kube-api-access-q68l2") pod "7bd15c84-e8e2-49d1-b85a-08863914f3f7" (UID: "7bd15c84-e8e2-49d1-b85a-08863914f3f7"). InnerVolumeSpecName "kube-api-access-q68l2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:06:48.170836 master-0 kubenswrapper[33013]: I0313 11:06:48.170773 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-util" (OuterVolumeSpecName: "util") pod "7bd15c84-e8e2-49d1-b85a-08863914f3f7" (UID: "7bd15c84-e8e2-49d1-b85a-08863914f3f7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:48.173559 master-0 kubenswrapper[33013]: I0313 11:06:48.173511 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ea1709-838c-4e89-899a-c5150c143ffd-kube-api-access-m9fnm" (OuterVolumeSpecName: "kube-api-access-m9fnm") pod "c1ea1709-838c-4e89-899a-c5150c143ffd" (UID: "c1ea1709-838c-4e89-899a-c5150c143ffd"). InnerVolumeSpecName "kube-api-access-m9fnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:06:48.173898 master-0 kubenswrapper[33013]: I0313 11:06:48.173846 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"]
Mar 13 11:06:48.174699 master-0 kubenswrapper[33013]: I0313 11:06:48.174660 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-util" (OuterVolumeSpecName: "util") pod "c1ea1709-838c-4e89-899a-c5150c143ffd" (UID: "c1ea1709-838c-4e89-899a-c5150c143ffd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:48.174942 master-0 kubenswrapper[33013]: I0313 11:06:48.174910 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9280a97a-fd2e-4875-9aa1-4fe70c210d31-kube-api-access-fdbvd" (OuterVolumeSpecName: "kube-api-access-fdbvd") pod "9280a97a-fd2e-4875-9aa1-4fe70c210d31" (UID: "9280a97a-fd2e-4875-9aa1-4fe70c210d31"). InnerVolumeSpecName "kube-api-access-fdbvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:06:48.176422 master-0 kubenswrapper[33013]: I0313 11:06:48.176217 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-util" (OuterVolumeSpecName: "util") pod "9280a97a-fd2e-4875-9aa1-4fe70c210d31" (UID: "9280a97a-fd2e-4875-9aa1-4fe70c210d31"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:48.255193 master-0 kubenswrapper[33013]: I0313 11:06:48.255093 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.255193 master-0 kubenswrapper[33013]: I0313 11:06:48.255199 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255274 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txgg8\" (UniqueName: \"kubernetes.io/projected/6cbaf724-597d-47ba-9a99-8ca5a8225945-kube-api-access-txgg8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255358 33013 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-util\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255376 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbvd\" (UniqueName: \"kubernetes.io/projected/9280a97a-fd2e-4875-9aa1-4fe70c210d31-kube-api-access-fdbvd\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255395 33013 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1ea1709-838c-4e89-899a-c5150c143ffd-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255407 33013 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-util\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255417 33013 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7bd15c84-e8e2-49d1-b85a-08863914f3f7-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255428 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q68l2\" (UniqueName: \"kubernetes.io/projected/7bd15c84-e8e2-49d1-b85a-08863914f3f7-kube-api-access-q68l2\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255439 33013 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255450 33013 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9280a97a-fd2e-4875-9aa1-4fe70c210d31-util\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:48.255530 master-0 kubenswrapper[33013]: I0313 11:06:48.255462 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9fnm\" (UniqueName: \"kubernetes.io/projected/c1ea1709-838c-4e89-899a-c5150c143ffd-kube-api-access-m9fnm\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:48.356662 master-0 kubenswrapper[33013]: I0313 11:06:48.356559 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.356928 master-0 kubenswrapper[33013]: I0313 11:06:48.356688 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.356928 master-0 kubenswrapper[33013]: I0313 11:06:48.356787 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txgg8\" (UniqueName: \"kubernetes.io/projected/6cbaf724-597d-47ba-9a99-8ca5a8225945-kube-api-access-txgg8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.357504 master-0 kubenswrapper[33013]: I0313 11:06:48.357464 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.357671 master-0 kubenswrapper[33013]: I0313 11:06:48.357545 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.373852 master-0 kubenswrapper[33013]: I0313 11:06:48.373785 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txgg8\" (UniqueName: \"kubernetes.io/projected/6cbaf724-597d-47ba-9a99-8ca5a8225945-kube-api-access-txgg8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.501468 master-0 kubenswrapper[33013]: I0313 11:06:48.501251 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:48.551477 master-0 kubenswrapper[33013]: I0313 11:06:48.551382 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22" event={"ID":"c1ea1709-838c-4e89-899a-c5150c143ffd","Type":"ContainerDied","Data":"335ae022fdf6ea1c7f4628462be0138f4be26a8bc007c2dee343b32d96500c2f"}
Mar 13 11:06:48.551477 master-0 kubenswrapper[33013]: I0313 11:06:48.551457 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="335ae022fdf6ea1c7f4628462be0138f4be26a8bc007c2dee343b32d96500c2f"
Mar 13 11:06:48.551477 master-0 kubenswrapper[33013]: I0313 11:06:48.551421 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57lt22"
Mar 13 11:06:48.554076 master-0 kubenswrapper[33013]: I0313 11:06:48.553997 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t" event={"ID":"9280a97a-fd2e-4875-9aa1-4fe70c210d31","Type":"ContainerDied","Data":"0e880afbca3eb560dced6c692576dcbcc888a79062d35fba8f1b3f3b4773408b"}
Mar 13 11:06:48.554076 master-0 kubenswrapper[33013]: I0313 11:06:48.554062 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e880afbca3eb560dced6c692576dcbcc888a79062d35fba8f1b3f3b4773408b"
Mar 13 11:06:48.554076 master-0 kubenswrapper[33013]: I0313 11:06:48.554022 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874w5d8t"
Mar 13 11:06:48.557339 master-0 kubenswrapper[33013]: I0313 11:06:48.557269 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2" event={"ID":"7bd15c84-e8e2-49d1-b85a-08863914f3f7","Type":"ContainerDied","Data":"73aa54f1c3baaf13657bbd11cfb4d55dfa5ae81e6975c157138126b621f99662"}
Mar 13 11:06:48.557339 master-0 kubenswrapper[33013]: I0313 11:06:48.557314 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73aa54f1c3baaf13657bbd11cfb4d55dfa5ae81e6975c157138126b621f99662"
Mar 13 11:06:48.557565 master-0 kubenswrapper[33013]: I0313 11:06:48.557373 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1766z2"
Mar 13 11:06:48.964679 master-0 kubenswrapper[33013]: I0313 11:06:48.964607 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"]
Mar 13 11:06:48.970472 master-0 kubenswrapper[33013]: W0313 11:06:48.970423 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cbaf724_597d_47ba_9a99_8ca5a8225945.slice/crio-4173202c009398096076ad487f943931848729fd60c018c4563cd94dec819109 WatchSource:0}: Error finding container 4173202c009398096076ad487f943931848729fd60c018c4563cd94dec819109: Status 404 returned error can't find the container with id 4173202c009398096076ad487f943931848729fd60c018c4563cd94dec819109
Mar 13 11:06:49.567522 master-0 kubenswrapper[33013]: I0313 11:06:49.567452 33013 generic.go:334] "Generic (PLEG): container finished" podID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerID="4fe4cb5a3cc1551a6abbf29c772d9b89c377220d8e8be36afe6b216359b93059" exitCode=0
Mar 13 11:06:49.567522 master-0 kubenswrapper[33013]: I0313 11:06:49.567517 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp" event={"ID":"6cbaf724-597d-47ba-9a99-8ca5a8225945","Type":"ContainerDied","Data":"4fe4cb5a3cc1551a6abbf29c772d9b89c377220d8e8be36afe6b216359b93059"}
Mar 13 11:06:49.568267 master-0 kubenswrapper[33013]: I0313 11:06:49.567556 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp" event={"ID":"6cbaf724-597d-47ba-9a99-8ca5a8225945","Type":"ContainerStarted","Data":"4173202c009398096076ad487f943931848729fd60c018c4563cd94dec819109"}
Mar 13 11:06:51.583395 master-0 kubenswrapper[33013]: I0313 11:06:51.583321 33013 generic.go:334] "Generic (PLEG): container finished" podID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerID="ebd6f88c068863e705c582d532fa7e9b0fea3d7a66d551346d845c2a4ee578f3" exitCode=0
Mar 13 11:06:51.583395 master-0 kubenswrapper[33013]: I0313 11:06:51.583374 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp" event={"ID":"6cbaf724-597d-47ba-9a99-8ca5a8225945","Type":"ContainerDied","Data":"ebd6f88c068863e705c582d532fa7e9b0fea3d7a66d551346d845c2a4ee578f3"}
Mar 13 11:06:52.593002 master-0 kubenswrapper[33013]: I0313 11:06:52.592932 33013 generic.go:334] "Generic (PLEG): container finished" podID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerID="702eb3f5d7b2d77ef46423718175fef1101cc87976309ae8bcf8d7e64f081f3e" exitCode=0
Mar 13 11:06:52.593002 master-0 kubenswrapper[33013]: I0313 11:06:52.592990 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp" event={"ID":"6cbaf724-597d-47ba-9a99-8ca5a8225945","Type":"ContainerDied","Data":"702eb3f5d7b2d77ef46423718175fef1101cc87976309ae8bcf8d7e64f081f3e"}
Mar 13 11:06:52.843271 master-0 kubenswrapper[33013]: I0313 11:06:52.843088 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"]
Mar 13 11:06:52.844136 master-0 kubenswrapper[33013]: I0313 11:06:52.844097 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"
Mar 13 11:06:52.846183 master-0 kubenswrapper[33013]: I0313 11:06:52.846150 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Mar 13 11:06:52.847281 master-0 kubenswrapper[33013]: I0313 11:06:52.847258 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Mar 13 11:06:52.868641 master-0 kubenswrapper[33013]: I0313 11:06:52.868529 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"]
Mar 13 11:06:52.946210 master-0 kubenswrapper[33013]: I0313 11:06:52.946129 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdff1144-4004-4871-a8cc-e9f065aa8373-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-th6w5\" (UID: \"bdff1144-4004-4871-a8cc-e9f065aa8373\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"
Mar 13 11:06:52.946474 master-0 kubenswrapper[33013]: I0313 11:06:52.946286 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd9ll\" (UniqueName: \"kubernetes.io/projected/bdff1144-4004-4871-a8cc-e9f065aa8373-kube-api-access-kd9ll\") pod \"cert-manager-operator-controller-manager-66c8bdd694-th6w5\" (UID: \"bdff1144-4004-4871-a8cc-e9f065aa8373\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"
Mar 13 11:06:53.047437 master-0 kubenswrapper[33013]: I0313 11:06:53.047362 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd9ll\" (UniqueName: \"kubernetes.io/projected/bdff1144-4004-4871-a8cc-e9f065aa8373-kube-api-access-kd9ll\") pod \"cert-manager-operator-controller-manager-66c8bdd694-th6w5\" (UID: \"bdff1144-4004-4871-a8cc-e9f065aa8373\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"
Mar 13 11:06:53.047752 master-0 kubenswrapper[33013]: I0313 11:06:53.047504 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdff1144-4004-4871-a8cc-e9f065aa8373-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-th6w5\" (UID: \"bdff1144-4004-4871-a8cc-e9f065aa8373\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"
Mar 13 11:06:53.048159 master-0 kubenswrapper[33013]: I0313 11:06:53.048112 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdff1144-4004-4871-a8cc-e9f065aa8373-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-th6w5\" (UID: \"bdff1144-4004-4871-a8cc-e9f065aa8373\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"
Mar 13 11:06:53.064771 master-0 kubenswrapper[33013]: I0313 11:06:53.064726 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd9ll\" (UniqueName: \"kubernetes.io/projected/bdff1144-4004-4871-a8cc-e9f065aa8373-kube-api-access-kd9ll\") pod \"cert-manager-operator-controller-manager-66c8bdd694-th6w5\" (UID: \"bdff1144-4004-4871-a8cc-e9f065aa8373\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"
Mar 13 11:06:53.162485 master-0 kubenswrapper[33013]: I0313 11:06:53.162327 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"
Mar 13 11:06:53.612210 master-0 kubenswrapper[33013]: I0313 11:06:53.612134 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5"]
Mar 13 11:06:53.617960 master-0 kubenswrapper[33013]: W0313 11:06:53.617881 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdff1144_4004_4871_a8cc_e9f065aa8373.slice/crio-6c7818835e46c629e9759df4b67929aec6f5094204663e3580cd3eb16e04d927 WatchSource:0}: Error finding container 6c7818835e46c629e9759df4b67929aec6f5094204663e3580cd3eb16e04d927: Status 404 returned error can't find the container with id 6c7818835e46c629e9759df4b67929aec6f5094204663e3580cd3eb16e04d927
Mar 13 11:06:53.962400 master-0 kubenswrapper[33013]: I0313 11:06:53.962344 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:53.965905 master-0 kubenswrapper[33013]: I0313 11:06:53.965858 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txgg8\" (UniqueName: \"kubernetes.io/projected/6cbaf724-597d-47ba-9a99-8ca5a8225945-kube-api-access-txgg8\") pod \"6cbaf724-597d-47ba-9a99-8ca5a8225945\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") "
Mar 13 11:06:53.966020 master-0 kubenswrapper[33013]: I0313 11:06:53.965954 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-util\") pod \"6cbaf724-597d-47ba-9a99-8ca5a8225945\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") "
Mar 13 11:06:53.966078 master-0 kubenswrapper[33013]: I0313 11:06:53.966023 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-bundle\") pod \"6cbaf724-597d-47ba-9a99-8ca5a8225945\" (UID: \"6cbaf724-597d-47ba-9a99-8ca5a8225945\") "
Mar 13 11:06:53.968556 master-0 kubenswrapper[33013]: I0313 11:06:53.968500 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-bundle" (OuterVolumeSpecName: "bundle") pod "6cbaf724-597d-47ba-9a99-8ca5a8225945" (UID: "6cbaf724-597d-47ba-9a99-8ca5a8225945"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:53.970003 master-0 kubenswrapper[33013]: I0313 11:06:53.968994 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbaf724-597d-47ba-9a99-8ca5a8225945-kube-api-access-txgg8" (OuterVolumeSpecName: "kube-api-access-txgg8") pod "6cbaf724-597d-47ba-9a99-8ca5a8225945" (UID: "6cbaf724-597d-47ba-9a99-8ca5a8225945"). InnerVolumeSpecName "kube-api-access-txgg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:06:53.981371 master-0 kubenswrapper[33013]: I0313 11:06:53.981304 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-util" (OuterVolumeSpecName: "util") pod "6cbaf724-597d-47ba-9a99-8ca5a8225945" (UID: "6cbaf724-597d-47ba-9a99-8ca5a8225945"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:06:54.067991 master-0 kubenswrapper[33013]: I0313 11:06:54.067498 33013 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-util\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:54.067991 master-0 kubenswrapper[33013]: I0313 11:06:54.067565 33013 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cbaf724-597d-47ba-9a99-8ca5a8225945-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:54.067991 master-0 kubenswrapper[33013]: I0313 11:06:54.067600 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txgg8\" (UniqueName: \"kubernetes.io/projected/6cbaf724-597d-47ba-9a99-8ca5a8225945-kube-api-access-txgg8\") on node \"master-0\" DevicePath \"\""
Mar 13 11:06:54.623366 master-0 kubenswrapper[33013]: I0313 11:06:54.623249 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp" event={"ID":"6cbaf724-597d-47ba-9a99-8ca5a8225945","Type":"ContainerDied","Data":"4173202c009398096076ad487f943931848729fd60c018c4563cd94dec819109"}
Mar 13 11:06:54.623366 master-0 kubenswrapper[33013]: I0313 11:06:54.623329 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4173202c009398096076ad487f943931848729fd60c018c4563cd94dec819109"
Mar 13 11:06:54.626646 master-0 kubenswrapper[33013]: I0313 11:06:54.624202 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbphp"
Mar 13 11:06:54.631399 master-0 kubenswrapper[33013]: I0313 11:06:54.631329 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5" event={"ID":"bdff1144-4004-4871-a8cc-e9f065aa8373","Type":"ContainerStarted","Data":"6c7818835e46c629e9759df4b67929aec6f5094204663e3580cd3eb16e04d927"}
Mar 13 11:06:56.329017 master-0 kubenswrapper[33013]: I0313 11:06:56.326542 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-x9krd"]
Mar 13 11:06:56.329017 master-0 kubenswrapper[33013]: E0313 11:06:56.327326 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerName="extract"
Mar 13 11:06:56.329017 master-0 kubenswrapper[33013]: I0313 11:06:56.327357 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerName="extract"
Mar 13 11:06:56.329017 master-0 kubenswrapper[33013]: E0313 11:06:56.327371 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerName="pull"
Mar 13 11:06:56.329017 master-0 kubenswrapper[33013]: I0313 11:06:56.327377 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerName="pull"
Mar 13 11:06:56.329017 master-0 kubenswrapper[33013]: E0313 11:06:56.327401 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerName="util"
Mar 13 11:06:56.329017 master-0 kubenswrapper[33013]: I0313 11:06:56.327409 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerName="util"
Mar 13 11:06:56.329017 master-0 kubenswrapper[33013]: I0313 11:06:56.327655 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbaf724-597d-47ba-9a99-8ca5a8225945" containerName="extract"
Mar 13 11:06:56.329017 master-0 kubenswrapper[33013]: I0313 11:06:56.328114 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-x9krd"
Mar 13 11:06:56.332965 master-0 kubenswrapper[33013]: I0313 11:06:56.332917 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Mar 13 11:06:56.333214 master-0 kubenswrapper[33013]: I0313 11:06:56.333181 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Mar 13 11:06:56.349245 master-0 kubenswrapper[33013]: I0313 11:06:56.349186 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-x9krd"]
Mar 13 11:06:56.410751 master-0 kubenswrapper[33013]: I0313 11:06:56.410689 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdb8j\" (UniqueName: \"kubernetes.io/projected/2823557f-3c73-4e93-b996-fcdf5f9d6e60-kube-api-access-zdb8j\") pod \"nmstate-operator-796d4cfff4-x9krd\" (UID: \"2823557f-3c73-4e93-b996-fcdf5f9d6e60\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-x9krd"
Mar 13 11:06:56.515564 master-0 kubenswrapper[33013]: I0313 11:06:56.515493 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdb8j\" (UniqueName: \"kubernetes.io/projected/2823557f-3c73-4e93-b996-fcdf5f9d6e60-kube-api-access-zdb8j\") pod \"nmstate-operator-796d4cfff4-x9krd\" (UID: \"2823557f-3c73-4e93-b996-fcdf5f9d6e60\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-x9krd"
Mar 13 11:06:56.556529 master-0 kubenswrapper[33013]: I0313 11:06:56.556461 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdb8j\" (UniqueName: \"kubernetes.io/projected/2823557f-3c73-4e93-b996-fcdf5f9d6e60-kube-api-access-zdb8j\") pod \"nmstate-operator-796d4cfff4-x9krd\" (UID: \"2823557f-3c73-4e93-b996-fcdf5f9d6e60\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-x9krd"
Mar 13 11:06:56.699999 master-0 kubenswrapper[33013]: I0313 11:06:56.699839 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-x9krd"
Mar 13 11:06:57.151756 master-0 kubenswrapper[33013]: I0313 11:06:57.151703 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-x9krd"]
Mar 13 11:06:57.156149 master-0 kubenswrapper[33013]: W0313 11:06:57.156117 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2823557f_3c73_4e93_b996_fcdf5f9d6e60.slice/crio-09e837d4adf5843e519c55ab3616b50f5b0d7de0bd65f391768ce3c537739fb3 WatchSource:0}: Error finding container 09e837d4adf5843e519c55ab3616b50f5b0d7de0bd65f391768ce3c537739fb3: Status 404 returned error can't find the container with id 09e837d4adf5843e519c55ab3616b50f5b0d7de0bd65f391768ce3c537739fb3
Mar 13 11:06:57.668073 master-0 kubenswrapper[33013]: I0313 11:06:57.668009 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-x9krd" event={"ID":"2823557f-3c73-4e93-b996-fcdf5f9d6e60","Type":"ContainerStarted","Data":"09e837d4adf5843e519c55ab3616b50f5b0d7de0bd65f391768ce3c537739fb3"}
Mar 13 11:06:57.669864 master-0 kubenswrapper[33013]: I0313 11:06:57.669822 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5" event={"ID":"bdff1144-4004-4871-a8cc-e9f065aa8373","Type":"ContainerStarted","Data":"b9440a679c835bc6f62e9197018adc2cbf086a74e9b3663339240febecc1da0d"}
Mar 13 11:06:57.700311 master-0 kubenswrapper[33013]: I0313 11:06:57.700216 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-th6w5" podStartSLOduration=2.539642896 podStartE2EDuration="5.700188965s" podCreationTimestamp="2026-03-13 11:06:52 +0000 UTC" firstStartedPulling="2026-03-13 11:06:53.622311603 +0000 UTC m=+597.098264972" lastFinishedPulling="2026-03-13 11:06:56.782857692 +0000 UTC m=+600.258811041" observedRunningTime="2026-03-13 11:06:57.695461173 +0000 UTC m=+601.171414542" watchObservedRunningTime="2026-03-13 11:06:57.700188965 +0000 UTC m=+601.176142314"
Mar 13 11:06:59.636202 master-0 kubenswrapper[33013]: I0313 11:06:59.636139 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-tzmpm"]
Mar 13 11:06:59.637423 master-0 kubenswrapper[33013]: I0313 11:06:59.637394 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm"
Mar 13 11:06:59.641932 master-0 kubenswrapper[33013]: I0313 11:06:59.641877 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Mar 13 11:06:59.641932 master-0 kubenswrapper[33013]: I0313 11:06:59.641904 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Mar 13 11:06:59.660558 master-0 kubenswrapper[33013]: I0313 11:06:59.659844 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-tzmpm"]
Mar 13 11:06:59.670419 master-0 kubenswrapper[33013]: I0313 11:06:59.670229 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16033af4-14d4-4784-8c43-36b23c4d0b56-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-tzmpm\" (UID: \"16033af4-14d4-4784-8c43-36b23c4d0b56\") " pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm"
Mar 13 11:06:59.670419 master-0 kubenswrapper[33013]: I0313 11:06:59.670347 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7xvd\" (UniqueName: \"kubernetes.io/projected/16033af4-14d4-4784-8c43-36b23c4d0b56-kube-api-access-l7xvd\") pod \"cert-manager-webhook-6888856db4-tzmpm\" (UID: \"16033af4-14d4-4784-8c43-36b23c4d0b56\") " pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm"
Mar 13 11:06:59.774933 master-0 kubenswrapper[33013]: I0313 11:06:59.773460 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7xvd\" (UniqueName: \"kubernetes.io/projected/16033af4-14d4-4784-8c43-36b23c4d0b56-kube-api-access-l7xvd\") pod \"cert-manager-webhook-6888856db4-tzmpm\" (UID: \"16033af4-14d4-4784-8c43-36b23c4d0b56\") " pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm"
Mar 13
11:06:59.774933 master-0 kubenswrapper[33013]: I0313 11:06:59.773626 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16033af4-14d4-4784-8c43-36b23c4d0b56-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-tzmpm\" (UID: \"16033af4-14d4-4784-8c43-36b23c4d0b56\") " pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm" Mar 13 11:06:59.798086 master-0 kubenswrapper[33013]: I0313 11:06:59.795500 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7xvd\" (UniqueName: \"kubernetes.io/projected/16033af4-14d4-4784-8c43-36b23c4d0b56-kube-api-access-l7xvd\") pod \"cert-manager-webhook-6888856db4-tzmpm\" (UID: \"16033af4-14d4-4784-8c43-36b23c4d0b56\") " pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm" Mar 13 11:06:59.798484 master-0 kubenswrapper[33013]: I0313 11:06:59.798429 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16033af4-14d4-4784-8c43-36b23c4d0b56-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-tzmpm\" (UID: \"16033af4-14d4-4784-8c43-36b23c4d0b56\") " pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm" Mar 13 11:07:00.001498 master-0 kubenswrapper[33013]: I0313 11:07:00.001357 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm" Mar 13 11:07:01.698181 master-0 kubenswrapper[33013]: I0313 11:07:01.698020 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-tzmpm"] Mar 13 11:07:01.787641 master-0 kubenswrapper[33013]: I0313 11:07:01.782880 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm" event={"ID":"16033af4-14d4-4784-8c43-36b23c4d0b56","Type":"ContainerStarted","Data":"8285ebe5305be06b291889224c2ed975582c9b7f56564de41a6b51925c17d62b"} Mar 13 11:07:01.787641 master-0 kubenswrapper[33013]: I0313 11:07:01.785491 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-x9krd" event={"ID":"2823557f-3c73-4e93-b996-fcdf5f9d6e60","Type":"ContainerStarted","Data":"427b4472101edb24689633d3182f992b418537101fcf9301a3cb322296a86099"} Mar 13 11:07:01.840549 master-0 kubenswrapper[33013]: I0313 11:07:01.840438 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-x9krd" podStartSLOduration=1.841797246 podStartE2EDuration="5.84041385s" podCreationTimestamp="2026-03-13 11:06:56 +0000 UTC" firstStartedPulling="2026-03-13 11:06:57.158865775 +0000 UTC m=+600.634819124" lastFinishedPulling="2026-03-13 11:07:01.157482379 +0000 UTC m=+604.633435728" observedRunningTime="2026-03-13 11:07:01.839473094 +0000 UTC m=+605.315426443" watchObservedRunningTime="2026-03-13 11:07:01.84041385 +0000 UTC m=+605.316367189" Mar 13 11:07:03.440263 master-0 kubenswrapper[33013]: I0313 11:07:03.440173 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-8p2fw"] Mar 13 11:07:03.441333 master-0 kubenswrapper[33013]: I0313 11:07:03.441300 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" Mar 13 11:07:03.459580 master-0 kubenswrapper[33013]: I0313 11:07:03.459530 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-8p2fw"] Mar 13 11:07:03.539621 master-0 kubenswrapper[33013]: I0313 11:07:03.539488 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900bf312-3ae6-4d58-8e5d-1201130c8ef5-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-8p2fw\" (UID: \"900bf312-3ae6-4d58-8e5d-1201130c8ef5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" Mar 13 11:07:03.540011 master-0 kubenswrapper[33013]: I0313 11:07:03.539672 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmcw5\" (UniqueName: \"kubernetes.io/projected/900bf312-3ae6-4d58-8e5d-1201130c8ef5-kube-api-access-qmcw5\") pod \"cert-manager-cainjector-5545bd876-8p2fw\" (UID: \"900bf312-3ae6-4d58-8e5d-1201130c8ef5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" Mar 13 11:07:03.643881 master-0 kubenswrapper[33013]: I0313 11:07:03.641042 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900bf312-3ae6-4d58-8e5d-1201130c8ef5-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-8p2fw\" (UID: \"900bf312-3ae6-4d58-8e5d-1201130c8ef5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" Mar 13 11:07:03.643881 master-0 kubenswrapper[33013]: I0313 11:07:03.641110 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmcw5\" (UniqueName: \"kubernetes.io/projected/900bf312-3ae6-4d58-8e5d-1201130c8ef5-kube-api-access-qmcw5\") pod \"cert-manager-cainjector-5545bd876-8p2fw\" (UID: \"900bf312-3ae6-4d58-8e5d-1201130c8ef5\") " 
pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" Mar 13 11:07:03.691701 master-0 kubenswrapper[33013]: I0313 11:07:03.689014 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmcw5\" (UniqueName: \"kubernetes.io/projected/900bf312-3ae6-4d58-8e5d-1201130c8ef5-kube-api-access-qmcw5\") pod \"cert-manager-cainjector-5545bd876-8p2fw\" (UID: \"900bf312-3ae6-4d58-8e5d-1201130c8ef5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" Mar 13 11:07:03.701611 master-0 kubenswrapper[33013]: I0313 11:07:03.698472 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900bf312-3ae6-4d58-8e5d-1201130c8ef5-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-8p2fw\" (UID: \"900bf312-3ae6-4d58-8e5d-1201130c8ef5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" Mar 13 11:07:03.787614 master-0 kubenswrapper[33013]: I0313 11:07:03.787000 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" Mar 13 11:07:04.318754 master-0 kubenswrapper[33013]: I0313 11:07:04.318695 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-8p2fw"] Mar 13 11:07:04.836720 master-0 kubenswrapper[33013]: I0313 11:07:04.835636 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" event={"ID":"900bf312-3ae6-4d58-8e5d-1201130c8ef5","Type":"ContainerStarted","Data":"6743d653d5eb1e9d812823e9367473a21ee046216a7d76fab606ab9199f98ab7"} Mar 13 11:07:05.948058 master-0 kubenswrapper[33013]: I0313 11:07:05.947987 33013 scope.go:117] "RemoveContainer" containerID="f64a190ab6bfcd5d71dd09d08481400c5646db74ded1e7ad4ac16e4a9b0b9632" Mar 13 11:07:06.858713 master-0 kubenswrapper[33013]: I0313 11:07:06.858652 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm" event={"ID":"16033af4-14d4-4784-8c43-36b23c4d0b56","Type":"ContainerStarted","Data":"14f5415b9f665c89b348f02e8bb32ce3f0afc81337e07d0747444dbf83755690"} Mar 13 11:07:06.860195 master-0 kubenswrapper[33013]: I0313 11:07:06.860165 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" event={"ID":"900bf312-3ae6-4d58-8e5d-1201130c8ef5","Type":"ContainerStarted","Data":"faa55e4c6713d5ef0a712e51b082cd5e6985346670bf49dc84488f2dbd0023fb"} Mar 13 11:07:07.875452 master-0 kubenswrapper[33013]: I0313 11:07:07.873637 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm" Mar 13 11:07:08.012617 master-0 kubenswrapper[33013]: I0313 11:07:08.011770 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-8p2fw" podStartSLOduration=2.715787935 podStartE2EDuration="5.011747869s" 
podCreationTimestamp="2026-03-13 11:07:03 +0000 UTC" firstStartedPulling="2026-03-13 11:07:04.326076283 +0000 UTC m=+607.802029632" lastFinishedPulling="2026-03-13 11:07:06.622036217 +0000 UTC m=+610.097989566" observedRunningTime="2026-03-13 11:07:08.00640471 +0000 UTC m=+611.482358059" watchObservedRunningTime="2026-03-13 11:07:08.011747869 +0000 UTC m=+611.487701218" Mar 13 11:07:08.012617 master-0 kubenswrapper[33013]: I0313 11:07:08.012302 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm" podStartSLOduration=4.100253048 podStartE2EDuration="9.012296254s" podCreationTimestamp="2026-03-13 11:06:59 +0000 UTC" firstStartedPulling="2026-03-13 11:07:01.708041116 +0000 UTC m=+605.183994465" lastFinishedPulling="2026-03-13 11:07:06.620084322 +0000 UTC m=+610.096037671" observedRunningTime="2026-03-13 11:07:07.927172012 +0000 UTC m=+611.403125371" watchObservedRunningTime="2026-03-13 11:07:08.012296254 +0000 UTC m=+611.488249604" Mar 13 11:07:10.512287 master-0 kubenswrapper[33013]: I0313 11:07:10.512221 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-bvmnl"] Mar 13 11:07:10.521981 master-0 kubenswrapper[33013]: I0313 11:07:10.521915 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-bvmnl" Mar 13 11:07:10.549609 master-0 kubenswrapper[33013]: I0313 11:07:10.544470 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-bvmnl"] Mar 13 11:07:10.683335 master-0 kubenswrapper[33013]: I0313 11:07:10.683086 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35865c0c-24cb-4653-9622-459215f4ae2e-bound-sa-token\") pod \"cert-manager-545d4d4674-bvmnl\" (UID: \"35865c0c-24cb-4653-9622-459215f4ae2e\") " pod="cert-manager/cert-manager-545d4d4674-bvmnl" Mar 13 11:07:10.683335 master-0 kubenswrapper[33013]: I0313 11:07:10.683151 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpjj4\" (UniqueName: \"kubernetes.io/projected/35865c0c-24cb-4653-9622-459215f4ae2e-kube-api-access-mpjj4\") pod \"cert-manager-545d4d4674-bvmnl\" (UID: \"35865c0c-24cb-4653-9622-459215f4ae2e\") " pod="cert-manager/cert-manager-545d4d4674-bvmnl" Mar 13 11:07:10.785886 master-0 kubenswrapper[33013]: I0313 11:07:10.785679 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35865c0c-24cb-4653-9622-459215f4ae2e-bound-sa-token\") pod \"cert-manager-545d4d4674-bvmnl\" (UID: \"35865c0c-24cb-4653-9622-459215f4ae2e\") " pod="cert-manager/cert-manager-545d4d4674-bvmnl" Mar 13 11:07:10.785886 master-0 kubenswrapper[33013]: I0313 11:07:10.785744 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpjj4\" (UniqueName: \"kubernetes.io/projected/35865c0c-24cb-4653-9622-459215f4ae2e-kube-api-access-mpjj4\") pod \"cert-manager-545d4d4674-bvmnl\" (UID: \"35865c0c-24cb-4653-9622-459215f4ae2e\") " pod="cert-manager/cert-manager-545d4d4674-bvmnl" Mar 13 11:07:11.341564 master-0 
kubenswrapper[33013]: I0313 11:07:11.341498 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpjj4\" (UniqueName: \"kubernetes.io/projected/35865c0c-24cb-4653-9622-459215f4ae2e-kube-api-access-mpjj4\") pod \"cert-manager-545d4d4674-bvmnl\" (UID: \"35865c0c-24cb-4653-9622-459215f4ae2e\") " pod="cert-manager/cert-manager-545d4d4674-bvmnl" Mar 13 11:07:11.357185 master-0 kubenswrapper[33013]: I0313 11:07:11.357065 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35865c0c-24cb-4653-9622-459215f4ae2e-bound-sa-token\") pod \"cert-manager-545d4d4674-bvmnl\" (UID: \"35865c0c-24cb-4653-9622-459215f4ae2e\") " pod="cert-manager/cert-manager-545d4d4674-bvmnl" Mar 13 11:07:11.437889 master-0 kubenswrapper[33013]: I0313 11:07:11.437823 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-bvmnl" Mar 13 11:07:11.942789 master-0 kubenswrapper[33013]: I0313 11:07:11.940287 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-bvmnl"] Mar 13 11:07:12.962101 master-0 kubenswrapper[33013]: I0313 11:07:12.962040 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-bvmnl" event={"ID":"35865c0c-24cb-4653-9622-459215f4ae2e","Type":"ContainerStarted","Data":"5be92e51b0997b06dccc7a9310842f2bdc547ca17f26d195ae3498a7d8783d63"} Mar 13 11:07:12.962681 master-0 kubenswrapper[33013]: I0313 11:07:12.962112 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-bvmnl" event={"ID":"35865c0c-24cb-4653-9622-459215f4ae2e","Type":"ContainerStarted","Data":"f5371fee40c7028414660ea34e796e76d318f6756282a725995e122e87aea529"} Mar 13 11:07:13.016217 master-0 kubenswrapper[33013]: I0313 11:07:13.016114 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-545d4d4674-bvmnl" podStartSLOduration=3.016088158 podStartE2EDuration="3.016088158s" podCreationTimestamp="2026-03-13 11:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:07:13.011394167 +0000 UTC m=+616.487347516" watchObservedRunningTime="2026-03-13 11:07:13.016088158 +0000 UTC m=+616.492041527" Mar 13 11:07:15.019457 master-0 kubenswrapper[33013]: I0313 11:07:15.019396 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-tzmpm" Mar 13 11:07:15.752347 master-0 kubenswrapper[33013]: I0313 11:07:15.752282 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg"] Mar 13 11:07:15.761483 master-0 kubenswrapper[33013]: I0313 11:07:15.754154 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:15.761483 master-0 kubenswrapper[33013]: I0313 11:07:15.758655 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 13 11:07:15.761483 master-0 kubenswrapper[33013]: I0313 11:07:15.761198 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 13 11:07:15.761483 master-0 kubenswrapper[33013]: I0313 11:07:15.761304 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 13 11:07:15.761483 master-0 kubenswrapper[33013]: I0313 11:07:15.761204 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 13 11:07:15.764516 master-0 kubenswrapper[33013]: I0313 11:07:15.764397 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg"] Mar 13 11:07:15.821039 master-0 kubenswrapper[33013]: I0313 11:07:15.819623 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwpdt\" (UniqueName: \"kubernetes.io/projected/87a8e83a-c728-4eb3-be94-6a2f1a39bf0a-kube-api-access-dwpdt\") pod \"metallb-operator-controller-manager-5cd5bd5576-k4drg\" (UID: \"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a\") " pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:15.821039 master-0 kubenswrapper[33013]: I0313 11:07:15.819705 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/87a8e83a-c728-4eb3-be94-6a2f1a39bf0a-apiservice-cert\") pod \"metallb-operator-controller-manager-5cd5bd5576-k4drg\" (UID: \"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a\") " pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:15.821039 master-0 kubenswrapper[33013]: I0313 11:07:15.819733 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/87a8e83a-c728-4eb3-be94-6a2f1a39bf0a-webhook-cert\") pod \"metallb-operator-controller-manager-5cd5bd5576-k4drg\" (UID: \"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a\") " pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:15.921617 master-0 kubenswrapper[33013]: I0313 11:07:15.921540 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwpdt\" (UniqueName: \"kubernetes.io/projected/87a8e83a-c728-4eb3-be94-6a2f1a39bf0a-kube-api-access-dwpdt\") pod \"metallb-operator-controller-manager-5cd5bd5576-k4drg\" (UID: \"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a\") " pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 
11:07:15.921617 master-0 kubenswrapper[33013]: I0313 11:07:15.921627 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/87a8e83a-c728-4eb3-be94-6a2f1a39bf0a-apiservice-cert\") pod \"metallb-operator-controller-manager-5cd5bd5576-k4drg\" (UID: \"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a\") " pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:15.921904 master-0 kubenswrapper[33013]: I0313 11:07:15.921654 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/87a8e83a-c728-4eb3-be94-6a2f1a39bf0a-webhook-cert\") pod \"metallb-operator-controller-manager-5cd5bd5576-k4drg\" (UID: \"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a\") " pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:15.949456 master-0 kubenswrapper[33013]: I0313 11:07:15.949410 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/87a8e83a-c728-4eb3-be94-6a2f1a39bf0a-apiservice-cert\") pod \"metallb-operator-controller-manager-5cd5bd5576-k4drg\" (UID: \"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a\") " pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:15.950375 master-0 kubenswrapper[33013]: I0313 11:07:15.950328 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/87a8e83a-c728-4eb3-be94-6a2f1a39bf0a-webhook-cert\") pod \"metallb-operator-controller-manager-5cd5bd5576-k4drg\" (UID: \"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a\") " pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:15.966770 master-0 kubenswrapper[33013]: I0313 11:07:15.966721 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwpdt\" (UniqueName: 
\"kubernetes.io/projected/87a8e83a-c728-4eb3-be94-6a2f1a39bf0a-kube-api-access-dwpdt\") pod \"metallb-operator-controller-manager-5cd5bd5576-k4drg\" (UID: \"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a\") " pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:16.093612 master-0 kubenswrapper[33013]: I0313 11:07:16.093249 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:16.603613 master-0 kubenswrapper[33013]: I0313 11:07:16.601566 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2"] Mar 13 11:07:16.603613 master-0 kubenswrapper[33013]: I0313 11:07:16.602980 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:16.607724 master-0 kubenswrapper[33013]: I0313 11:07:16.606490 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 11:07:16.607724 master-0 kubenswrapper[33013]: I0313 11:07:16.606906 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 13 11:07:16.620721 master-0 kubenswrapper[33013]: I0313 11:07:16.620618 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2"] Mar 13 11:07:16.656372 master-0 kubenswrapper[33013]: I0313 11:07:16.656264 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rmw\" (UniqueName: \"kubernetes.io/projected/373ced2b-7936-4842-b464-4287e4438d09-kube-api-access-68rmw\") pod \"metallb-operator-webhook-server-665cc46b55-s5pc2\" (UID: \"373ced2b-7936-4842-b464-4287e4438d09\") " pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" 
Mar 13 11:07:16.656734 master-0 kubenswrapper[33013]: I0313 11:07:16.656391 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/373ced2b-7936-4842-b464-4287e4438d09-webhook-cert\") pod \"metallb-operator-webhook-server-665cc46b55-s5pc2\" (UID: \"373ced2b-7936-4842-b464-4287e4438d09\") " pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:16.656734 master-0 kubenswrapper[33013]: I0313 11:07:16.656444 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/373ced2b-7936-4842-b464-4287e4438d09-apiservice-cert\") pod \"metallb-operator-webhook-server-665cc46b55-s5pc2\" (UID: \"373ced2b-7936-4842-b464-4287e4438d09\") " pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:16.758720 master-0 kubenswrapper[33013]: I0313 11:07:16.757915 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68rmw\" (UniqueName: \"kubernetes.io/projected/373ced2b-7936-4842-b464-4287e4438d09-kube-api-access-68rmw\") pod \"metallb-operator-webhook-server-665cc46b55-s5pc2\" (UID: \"373ced2b-7936-4842-b464-4287e4438d09\") " pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:16.758720 master-0 kubenswrapper[33013]: I0313 11:07:16.758196 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/373ced2b-7936-4842-b464-4287e4438d09-webhook-cert\") pod \"metallb-operator-webhook-server-665cc46b55-s5pc2\" (UID: \"373ced2b-7936-4842-b464-4287e4438d09\") " pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:16.758720 master-0 kubenswrapper[33013]: I0313 11:07:16.758657 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/373ced2b-7936-4842-b464-4287e4438d09-apiservice-cert\") pod \"metallb-operator-webhook-server-665cc46b55-s5pc2\" (UID: \"373ced2b-7936-4842-b464-4287e4438d09\") " pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:16.762958 master-0 kubenswrapper[33013]: I0313 11:07:16.762930 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/373ced2b-7936-4842-b464-4287e4438d09-apiservice-cert\") pod \"metallb-operator-webhook-server-665cc46b55-s5pc2\" (UID: \"373ced2b-7936-4842-b464-4287e4438d09\") " pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:16.766369 master-0 kubenswrapper[33013]: I0313 11:07:16.764999 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/373ced2b-7936-4842-b464-4287e4438d09-webhook-cert\") pod \"metallb-operator-webhook-server-665cc46b55-s5pc2\" (UID: \"373ced2b-7936-4842-b464-4287e4438d09\") " pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:16.769800 master-0 kubenswrapper[33013]: W0313 11:07:16.769755 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a8e83a_c728_4eb3_be94_6a2f1a39bf0a.slice/crio-91de3b83fc8d089385ddf71f666c57c6cd0ff988b83b937ae3b9e1cc13641c3b WatchSource:0}: Error finding container 91de3b83fc8d089385ddf71f666c57c6cd0ff988b83b937ae3b9e1cc13641c3b: Status 404 returned error can't find the container with id 91de3b83fc8d089385ddf71f666c57c6cd0ff988b83b937ae3b9e1cc13641c3b Mar 13 11:07:16.769990 master-0 kubenswrapper[33013]: I0313 11:07:16.769929 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg"] Mar 13 11:07:16.790305 master-0 kubenswrapper[33013]: I0313 11:07:16.790253 
33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rmw\" (UniqueName: \"kubernetes.io/projected/373ced2b-7936-4842-b464-4287e4438d09-kube-api-access-68rmw\") pod \"metallb-operator-webhook-server-665cc46b55-s5pc2\" (UID: \"373ced2b-7936-4842-b464-4287e4438d09\") " pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:16.936236 master-0 kubenswrapper[33013]: I0313 11:07:16.936061 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:17.028007 master-0 kubenswrapper[33013]: I0313 11:07:17.027947 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" event={"ID":"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a","Type":"ContainerStarted","Data":"91de3b83fc8d089385ddf71f666c57c6cd0ff988b83b937ae3b9e1cc13641c3b"} Mar 13 11:07:17.438780 master-0 kubenswrapper[33013]: I0313 11:07:17.438711 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2"] Mar 13 11:07:18.036900 master-0 kubenswrapper[33013]: I0313 11:07:18.036794 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" event={"ID":"373ced2b-7936-4842-b464-4287e4438d09","Type":"ContainerStarted","Data":"254f2fc23fb60cc014e2a9dee7a79e0534f03a214b2bfa0e7f120833cae25277"} Mar 13 11:07:23.648608 master-0 kubenswrapper[33013]: I0313 11:07:23.648135 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4"] Mar 13 11:07:23.651143 master-0 kubenswrapper[33013]: I0313 11:07:23.649442 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4" Mar 13 11:07:23.651900 master-0 kubenswrapper[33013]: I0313 11:07:23.651870 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 13 11:07:23.652151 master-0 kubenswrapper[33013]: I0313 11:07:23.652130 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 13 11:07:23.705609 master-0 kubenswrapper[33013]: I0313 11:07:23.697283 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4"] Mar 13 11:07:23.779933 master-0 kubenswrapper[33013]: I0313 11:07:23.779001 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pvfj\" (UniqueName: \"kubernetes.io/projected/668d7a09-1321-4539-a86e-87605de91e73-kube-api-access-7pvfj\") pod \"obo-prometheus-operator-68bc856cb9-zs9p4\" (UID: \"668d7a09-1321-4539-a86e-87605de91e73\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4" Mar 13 11:07:23.881634 master-0 kubenswrapper[33013]: I0313 11:07:23.880726 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pvfj\" (UniqueName: \"kubernetes.io/projected/668d7a09-1321-4539-a86e-87605de91e73-kube-api-access-7pvfj\") pod \"obo-prometheus-operator-68bc856cb9-zs9p4\" (UID: \"668d7a09-1321-4539-a86e-87605de91e73\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4" Mar 13 11:07:23.984070 master-0 kubenswrapper[33013]: I0313 11:07:23.984011 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pvfj\" (UniqueName: \"kubernetes.io/projected/668d7a09-1321-4539-a86e-87605de91e73-kube-api-access-7pvfj\") pod \"obo-prometheus-operator-68bc856cb9-zs9p4\" (UID: \"668d7a09-1321-4539-a86e-87605de91e73\") " 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4" Mar 13 11:07:23.994484 master-0 kubenswrapper[33013]: I0313 11:07:23.994432 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb"] Mar 13 11:07:23.995548 master-0 kubenswrapper[33013]: I0313 11:07:23.995524 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" Mar 13 11:07:23.999257 master-0 kubenswrapper[33013]: I0313 11:07:23.999217 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 13 11:07:24.018241 master-0 kubenswrapper[33013]: I0313 11:07:24.016854 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq"] Mar 13 11:07:24.018241 master-0 kubenswrapper[33013]: I0313 11:07:24.018000 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" Mar 13 11:07:24.064609 master-0 kubenswrapper[33013]: I0313 11:07:24.059766 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb"] Mar 13 11:07:24.094611 master-0 kubenswrapper[33013]: I0313 11:07:24.077220 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq"] Mar 13 11:07:24.094611 master-0 kubenswrapper[33013]: I0313 11:07:24.084412 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4836216-bfc4-4254-b5be-68ed1c578c35-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb\" (UID: \"c4836216-bfc4-4254-b5be-68ed1c578c35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" Mar 13 11:07:24.094611 master-0 kubenswrapper[33013]: I0313 11:07:24.084492 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/794136ef-2e20-409a-8fd2-419ad1d76e0f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq\" (UID: \"794136ef-2e20-409a-8fd2-419ad1d76e0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" Mar 13 11:07:24.094611 master-0 kubenswrapper[33013]: I0313 11:07:24.084526 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4836216-bfc4-4254-b5be-68ed1c578c35-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb\" (UID: \"c4836216-bfc4-4254-b5be-68ed1c578c35\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" Mar 13 11:07:24.094611 master-0 kubenswrapper[33013]: I0313 11:07:24.084561 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/794136ef-2e20-409a-8fd2-419ad1d76e0f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq\" (UID: \"794136ef-2e20-409a-8fd2-419ad1d76e0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" Mar 13 11:07:24.098603 master-0 kubenswrapper[33013]: I0313 11:07:24.097853 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4" Mar 13 11:07:24.127610 master-0 kubenswrapper[33013]: I0313 11:07:24.127165 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" event={"ID":"87a8e83a-c728-4eb3-be94-6a2f1a39bf0a","Type":"ContainerStarted","Data":"2103767cd5190d849fe145672b3c9c48dbb0e3f8c75ea29d776d8ba8bf920027"} Mar 13 11:07:24.133038 master-0 kubenswrapper[33013]: I0313 11:07:24.129754 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:07:24.179609 master-0 kubenswrapper[33013]: I0313 11:07:24.179381 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-bmtjs"] Mar 13 11:07:24.197644 master-0 kubenswrapper[33013]: I0313 11:07:24.187973 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4836216-bfc4-4254-b5be-68ed1c578c35-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb\" (UID: \"c4836216-bfc4-4254-b5be-68ed1c578c35\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" Mar 13 11:07:24.197644 master-0 kubenswrapper[33013]: I0313 11:07:24.188061 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/794136ef-2e20-409a-8fd2-419ad1d76e0f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq\" (UID: \"794136ef-2e20-409a-8fd2-419ad1d76e0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" Mar 13 11:07:24.197644 master-0 kubenswrapper[33013]: I0313 11:07:24.188171 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4836216-bfc4-4254-b5be-68ed1c578c35-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb\" (UID: \"c4836216-bfc4-4254-b5be-68ed1c578c35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" Mar 13 11:07:24.197644 master-0 kubenswrapper[33013]: I0313 11:07:24.188221 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/794136ef-2e20-409a-8fd2-419ad1d76e0f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq\" (UID: \"794136ef-2e20-409a-8fd2-419ad1d76e0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" Mar 13 11:07:24.200357 master-0 kubenswrapper[33013]: I0313 11:07:24.199717 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:24.205628 master-0 kubenswrapper[33013]: I0313 11:07:24.200950 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/794136ef-2e20-409a-8fd2-419ad1d76e0f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq\" (UID: \"794136ef-2e20-409a-8fd2-419ad1d76e0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" Mar 13 11:07:24.205628 master-0 kubenswrapper[33013]: I0313 11:07:24.201874 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/794136ef-2e20-409a-8fd2-419ad1d76e0f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq\" (UID: \"794136ef-2e20-409a-8fd2-419ad1d76e0f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" Mar 13 11:07:24.208304 master-0 kubenswrapper[33013]: I0313 11:07:24.206415 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4836216-bfc4-4254-b5be-68ed1c578c35-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb\" (UID: \"c4836216-bfc4-4254-b5be-68ed1c578c35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" Mar 13 11:07:24.208304 master-0 kubenswrapper[33013]: I0313 11:07:24.208145 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 13 11:07:24.215704 master-0 kubenswrapper[33013]: I0313 11:07:24.212024 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4836216-bfc4-4254-b5be-68ed1c578c35-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb\" (UID: 
\"c4836216-bfc4-4254-b5be-68ed1c578c35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" Mar 13 11:07:24.229327 master-0 kubenswrapper[33013]: I0313 11:07:24.229229 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" podStartSLOduration=2.412393911 podStartE2EDuration="9.229179513s" podCreationTimestamp="2026-03-13 11:07:15 +0000 UTC" firstStartedPulling="2026-03-13 11:07:16.774311814 +0000 UTC m=+620.250265163" lastFinishedPulling="2026-03-13 11:07:23.591097416 +0000 UTC m=+627.067050765" observedRunningTime="2026-03-13 11:07:24.187079515 +0000 UTC m=+627.663032874" watchObservedRunningTime="2026-03-13 11:07:24.229179513 +0000 UTC m=+627.705132862" Mar 13 11:07:24.230682 master-0 kubenswrapper[33013]: I0313 11:07:24.230646 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" Mar 13 11:07:24.263257 master-0 kubenswrapper[33013]: I0313 11:07:24.263202 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-bmtjs"] Mar 13 11:07:24.292140 master-0 kubenswrapper[33013]: I0313 11:07:24.291622 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ba64afa-befe-4719-a777-f9364c2f956e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-bmtjs\" (UID: \"7ba64afa-befe-4719-a777-f9364c2f956e\") " pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:24.292140 master-0 kubenswrapper[33013]: I0313 11:07:24.291776 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq2m6\" (UniqueName: 
\"kubernetes.io/projected/7ba64afa-befe-4719-a777-f9364c2f956e-kube-api-access-lq2m6\") pod \"observability-operator-59bdc8b94-bmtjs\" (UID: \"7ba64afa-befe-4719-a777-f9364c2f956e\") " pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:24.395696 master-0 kubenswrapper[33013]: I0313 11:07:24.394720 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq2m6\" (UniqueName: \"kubernetes.io/projected/7ba64afa-befe-4719-a777-f9364c2f956e-kube-api-access-lq2m6\") pod \"observability-operator-59bdc8b94-bmtjs\" (UID: \"7ba64afa-befe-4719-a777-f9364c2f956e\") " pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:24.395696 master-0 kubenswrapper[33013]: I0313 11:07:24.394960 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ba64afa-befe-4719-a777-f9364c2f956e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-bmtjs\" (UID: \"7ba64afa-befe-4719-a777-f9364c2f956e\") " pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:24.404093 master-0 kubenswrapper[33013]: I0313 11:07:24.402356 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ba64afa-befe-4719-a777-f9364c2f956e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-bmtjs\" (UID: \"7ba64afa-befe-4719-a777-f9364c2f956e\") " pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:24.415377 master-0 kubenswrapper[33013]: I0313 11:07:24.414534 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7sv9k"] Mar 13 11:07:24.415722 master-0 kubenswrapper[33013]: I0313 11:07:24.415690 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:24.452805 master-0 kubenswrapper[33013]: I0313 11:07:24.448131 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" Mar 13 11:07:24.532688 master-0 kubenswrapper[33013]: I0313 11:07:24.532502 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfm5l\" (UniqueName: \"kubernetes.io/projected/a0e66140-1f7b-4bef-8ca9-406569dc481e-kube-api-access-sfm5l\") pod \"perses-operator-5bf474d74f-7sv9k\" (UID: \"a0e66140-1f7b-4bef-8ca9-406569dc481e\") " pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:24.532688 master-0 kubenswrapper[33013]: I0313 11:07:24.532637 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a0e66140-1f7b-4bef-8ca9-406569dc481e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7sv9k\" (UID: \"a0e66140-1f7b-4bef-8ca9-406569dc481e\") " pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:24.547250 master-0 kubenswrapper[33013]: I0313 11:07:24.537385 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq2m6\" (UniqueName: \"kubernetes.io/projected/7ba64afa-befe-4719-a777-f9364c2f956e-kube-api-access-lq2m6\") pod \"observability-operator-59bdc8b94-bmtjs\" (UID: \"7ba64afa-befe-4719-a777-f9364c2f956e\") " pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:24.590071 master-0 kubenswrapper[33013]: I0313 11:07:24.579431 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7sv9k"] Mar 13 11:07:24.639921 master-0 kubenswrapper[33013]: I0313 11:07:24.639202 33013 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a0e66140-1f7b-4bef-8ca9-406569dc481e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7sv9k\" (UID: \"a0e66140-1f7b-4bef-8ca9-406569dc481e\") " pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:24.639921 master-0 kubenswrapper[33013]: I0313 11:07:24.639366 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfm5l\" (UniqueName: \"kubernetes.io/projected/a0e66140-1f7b-4bef-8ca9-406569dc481e-kube-api-access-sfm5l\") pod \"perses-operator-5bf474d74f-7sv9k\" (UID: \"a0e66140-1f7b-4bef-8ca9-406569dc481e\") " pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:24.639921 master-0 kubenswrapper[33013]: I0313 11:07:24.639661 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:24.693983 master-0 kubenswrapper[33013]: I0313 11:07:24.685336 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a0e66140-1f7b-4bef-8ca9-406569dc481e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7sv9k\" (UID: \"a0e66140-1f7b-4bef-8ca9-406569dc481e\") " pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:24.942296 master-0 kubenswrapper[33013]: I0313 11:07:24.939786 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfm5l\" (UniqueName: \"kubernetes.io/projected/a0e66140-1f7b-4bef-8ca9-406569dc481e-kube-api-access-sfm5l\") pod \"perses-operator-5bf474d74f-7sv9k\" (UID: \"a0e66140-1f7b-4bef-8ca9-406569dc481e\") " pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:24.971992 master-0 kubenswrapper[33013]: I0313 11:07:24.970319 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4"] 
Mar 13 11:07:25.009096 master-0 kubenswrapper[33013]: I0313 11:07:25.009036 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq"] Mar 13 11:07:25.034185 master-0 kubenswrapper[33013]: W0313 11:07:25.033776 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod794136ef_2e20_409a_8fd2_419ad1d76e0f.slice/crio-c46d9af254c05278cb2438715334abebc52f707013f1b8aea559d3be068cc817 WatchSource:0}: Error finding container c46d9af254c05278cb2438715334abebc52f707013f1b8aea559d3be068cc817: Status 404 returned error can't find the container with id c46d9af254c05278cb2438715334abebc52f707013f1b8aea559d3be068cc817 Mar 13 11:07:25.140450 master-0 kubenswrapper[33013]: I0313 11:07:25.139559 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb"] Mar 13 11:07:25.155732 master-0 kubenswrapper[33013]: I0313 11:07:25.145647 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" event={"ID":"794136ef-2e20-409a-8fd2-419ad1d76e0f","Type":"ContainerStarted","Data":"c46d9af254c05278cb2438715334abebc52f707013f1b8aea559d3be068cc817"} Mar 13 11:07:25.155732 master-0 kubenswrapper[33013]: I0313 11:07:25.148816 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" event={"ID":"373ced2b-7936-4842-b464-4287e4438d09","Type":"ContainerStarted","Data":"09400d0740e311f383754b5fbc5a552d990132bce9e753f83084ed417afb0d08"} Mar 13 11:07:25.155732 master-0 kubenswrapper[33013]: I0313 11:07:25.149976 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:25.155732 master-0 kubenswrapper[33013]: W0313 
11:07:25.152628 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4836216_bfc4_4254_b5be_68ed1c578c35.slice/crio-91b23fe1478ed15c881ab90fce0cf3d5f2ab1e1eb04e2cecc6144847ae24614c WatchSource:0}: Error finding container 91b23fe1478ed15c881ab90fce0cf3d5f2ab1e1eb04e2cecc6144847ae24614c: Status 404 returned error can't find the container with id 91b23fe1478ed15c881ab90fce0cf3d5f2ab1e1eb04e2cecc6144847ae24614c Mar 13 11:07:25.156230 master-0 kubenswrapper[33013]: I0313 11:07:25.156110 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4" event={"ID":"668d7a09-1321-4539-a86e-87605de91e73","Type":"ContainerStarted","Data":"1298489cca80c2bdf22737be53f3b92653a22ccc10b19d1643f8ce39a5e9dc9b"} Mar 13 11:07:25.190619 master-0 kubenswrapper[33013]: I0313 11:07:25.186536 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" podStartSLOduration=2.751422615 podStartE2EDuration="9.186518945s" podCreationTimestamp="2026-03-13 11:07:16 +0000 UTC" firstStartedPulling="2026-03-13 11:07:17.449614243 +0000 UTC m=+620.925567592" lastFinishedPulling="2026-03-13 11:07:23.884710573 +0000 UTC m=+627.360663922" observedRunningTime="2026-03-13 11:07:25.1845276 +0000 UTC m=+628.660480939" watchObservedRunningTime="2026-03-13 11:07:25.186518945 +0000 UTC m=+628.662472294" Mar 13 11:07:25.221169 master-0 kubenswrapper[33013]: I0313 11:07:25.219994 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:25.226982 master-0 kubenswrapper[33013]: I0313 11:07:25.226938 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-bmtjs"] Mar 13 11:07:25.843855 master-0 kubenswrapper[33013]: W0313 11:07:25.843773 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0e66140_1f7b_4bef_8ca9_406569dc481e.slice/crio-71d5f5fb67c5d4ceaaede2d8ffa7c9f85950218fa567a33903e560c44d509507 WatchSource:0}: Error finding container 71d5f5fb67c5d4ceaaede2d8ffa7c9f85950218fa567a33903e560c44d509507: Status 404 returned error can't find the container with id 71d5f5fb67c5d4ceaaede2d8ffa7c9f85950218fa567a33903e560c44d509507 Mar 13 11:07:25.851459 master-0 kubenswrapper[33013]: I0313 11:07:25.851400 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7sv9k"] Mar 13 11:07:26.172914 master-0 kubenswrapper[33013]: I0313 11:07:26.172253 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" event={"ID":"7ba64afa-befe-4719-a777-f9364c2f956e","Type":"ContainerStarted","Data":"bfb206afef7c2eea4b30e7df8b0e600936fb00dc4f8856e2af45cd22ba0aff68"} Mar 13 11:07:26.179411 master-0 kubenswrapper[33013]: I0313 11:07:26.178937 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" event={"ID":"c4836216-bfc4-4254-b5be-68ed1c578c35","Type":"ContainerStarted","Data":"91b23fe1478ed15c881ab90fce0cf3d5f2ab1e1eb04e2cecc6144847ae24614c"} Mar 13 11:07:26.182774 master-0 kubenswrapper[33013]: I0313 11:07:26.181809 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" 
event={"ID":"a0e66140-1f7b-4bef-8ca9-406569dc481e","Type":"ContainerStarted","Data":"71d5f5fb67c5d4ceaaede2d8ffa7c9f85950218fa567a33903e560c44d509507"} Mar 13 11:07:36.949716 master-0 kubenswrapper[33013]: I0313 11:07:36.948749 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-665cc46b55-s5pc2" Mar 13 11:07:39.353735 master-0 kubenswrapper[33013]: I0313 11:07:39.351801 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:39.353735 master-0 kubenswrapper[33013]: I0313 11:07:39.352315 33013 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-bmtjs container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.128.0.129:8081/healthz\": dial tcp 10.128.0.129:8081: connect: connection refused" start-of-body= Mar 13 11:07:39.353735 master-0 kubenswrapper[33013]: I0313 11:07:39.352369 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" podUID="7ba64afa-befe-4719-a777-f9364c2f956e" containerName="operator" probeResult="failure" output="Get \"http://10.128.0.129:8081/healthz\": dial tcp 10.128.0.129:8081: connect: connection refused" Mar 13 11:07:39.363660 master-0 kubenswrapper[33013]: I0313 11:07:39.363521 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" event={"ID":"c4836216-bfc4-4254-b5be-68ed1c578c35","Type":"ContainerStarted","Data":"457986bd549e56a641a805738d3d97897bcd7e01a0d79b04be5d3fe9438f7c13"} Mar 13 11:07:39.370707 master-0 kubenswrapper[33013]: I0313 11:07:39.370539 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" 
event={"ID":"794136ef-2e20-409a-8fd2-419ad1d76e0f","Type":"ContainerStarted","Data":"ec146afdd54ad52d487566cd80c936379ca417dfaa0c415aa8266e863a0b5199"} Mar 13 11:07:39.412731 master-0 kubenswrapper[33013]: I0313 11:07:39.412323 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" podStartSLOduration=1.801209778 podStartE2EDuration="15.412295011s" podCreationTimestamp="2026-03-13 11:07:24 +0000 UTC" firstStartedPulling="2026-03-13 11:07:25.2778082 +0000 UTC m=+628.753761549" lastFinishedPulling="2026-03-13 11:07:38.888893433 +0000 UTC m=+642.364846782" observedRunningTime="2026-03-13 11:07:39.397800745 +0000 UTC m=+642.873754104" watchObservedRunningTime="2026-03-13 11:07:39.412295011 +0000 UTC m=+642.888248360" Mar 13 11:07:39.451670 master-0 kubenswrapper[33013]: I0313 11:07:39.448023 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-tjwvq" podStartSLOduration=2.6433962109999998 podStartE2EDuration="16.44799741s" podCreationTimestamp="2026-03-13 11:07:23 +0000 UTC" firstStartedPulling="2026-03-13 11:07:25.058025119 +0000 UTC m=+628.533978468" lastFinishedPulling="2026-03-13 11:07:38.862626318 +0000 UTC m=+642.338579667" observedRunningTime="2026-03-13 11:07:39.446994512 +0000 UTC m=+642.922947861" watchObservedRunningTime="2026-03-13 11:07:39.44799741 +0000 UTC m=+642.923950759" Mar 13 11:07:39.560753 master-0 kubenswrapper[33013]: I0313 11:07:39.553530 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69d745ddfb-m6fbb" podStartSLOduration=2.880287091 podStartE2EDuration="16.553503763s" podCreationTimestamp="2026-03-13 11:07:23 +0000 UTC" firstStartedPulling="2026-03-13 11:07:25.160569409 +0000 UTC m=+628.636522758" lastFinishedPulling="2026-03-13 11:07:38.833786081 +0000 UTC 
m=+642.309739430" observedRunningTime="2026-03-13 11:07:39.538205785 +0000 UTC m=+643.014159134" watchObservedRunningTime="2026-03-13 11:07:39.553503763 +0000 UTC m=+643.029457112" Mar 13 11:07:40.383114 master-0 kubenswrapper[33013]: I0313 11:07:40.383050 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4" event={"ID":"668d7a09-1321-4539-a86e-87605de91e73","Type":"ContainerStarted","Data":"38ebc7330ed986a52e6dae19c55be438613178f4ba39b12701589900551d244d"} Mar 13 11:07:40.386300 master-0 kubenswrapper[33013]: I0313 11:07:40.386241 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" event={"ID":"7ba64afa-befe-4719-a777-f9364c2f956e","Type":"ContainerStarted","Data":"8a6c2eb58c1dea70c048d26072ac8e9f740a1ce60df55b8a5219b645e83a2c23"} Mar 13 11:07:40.390467 master-0 kubenswrapper[33013]: I0313 11:07:40.389060 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" event={"ID":"a0e66140-1f7b-4bef-8ca9-406569dc481e","Type":"ContainerStarted","Data":"770696f614ffb897be9bdd79324765b6dd25ac66e9f6f2a024bf1f643105e02f"} Mar 13 11:07:40.408681 master-0 kubenswrapper[33013]: I0313 11:07:40.407397 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zs9p4" podStartSLOduration=3.534292864 podStartE2EDuration="17.407376889s" podCreationTimestamp="2026-03-13 11:07:23 +0000 UTC" firstStartedPulling="2026-03-13 11:07:24.990748557 +0000 UTC m=+628.466701906" lastFinishedPulling="2026-03-13 11:07:38.863832582 +0000 UTC m=+642.339785931" observedRunningTime="2026-03-13 11:07:40.405902028 +0000 UTC m=+643.881855387" watchObservedRunningTime="2026-03-13 11:07:40.407376889 +0000 UTC m=+643.883330238" Mar 13 11:07:40.440692 master-0 kubenswrapper[33013]: I0313 11:07:40.440605 33013 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-bmtjs" Mar 13 11:07:40.445019 master-0 kubenswrapper[33013]: I0313 11:07:40.444931 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" podStartSLOduration=3.457697107 podStartE2EDuration="16.4449117s" podCreationTimestamp="2026-03-13 11:07:24 +0000 UTC" firstStartedPulling="2026-03-13 11:07:25.84772486 +0000 UTC m=+629.323678209" lastFinishedPulling="2026-03-13 11:07:38.834939453 +0000 UTC m=+642.310892802" observedRunningTime="2026-03-13 11:07:40.441463343 +0000 UTC m=+643.917416692" watchObservedRunningTime="2026-03-13 11:07:40.4449117 +0000 UTC m=+643.920865049" Mar 13 11:07:41.396116 master-0 kubenswrapper[33013]: I0313 11:07:41.396064 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:45.225763 master-0 kubenswrapper[33013]: I0313 11:07:45.225694 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-7sv9k" Mar 13 11:07:56.096906 master-0 kubenswrapper[33013]: I0313 11:07:56.096854 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5cd5bd5576-k4drg" Mar 13 11:08:03.658994 master-0 kubenswrapper[33013]: I0313 11:08:03.658805 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d"] Mar 13 11:08:03.664564 master-0 kubenswrapper[33013]: I0313 11:08:03.660028 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:03.665307 master-0 kubenswrapper[33013]: I0313 11:08:03.665238 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fcf6e5ad-527f-422b-a88f-c91c66625546-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-mmj7d\" (UID: \"fcf6e5ad-527f-422b-a88f-c91c66625546\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:03.665390 master-0 kubenswrapper[33013]: I0313 11:08:03.665365 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dcc\" (UniqueName: \"kubernetes.io/projected/fcf6e5ad-527f-422b-a88f-c91c66625546-kube-api-access-p9dcc\") pod \"frr-k8s-webhook-server-bcc4b6f68-mmj7d\" (UID: \"fcf6e5ad-527f-422b-a88f-c91c66625546\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:03.670723 master-0 kubenswrapper[33013]: I0313 11:08:03.669700 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 13 11:08:03.682070 master-0 kubenswrapper[33013]: I0313 11:08:03.681985 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d"] Mar 13 11:08:03.710360 master-0 kubenswrapper[33013]: I0313 11:08:03.710297 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z9dtp"] Mar 13 11:08:03.718827 master-0 kubenswrapper[33013]: I0313 11:08:03.718752 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.727159 master-0 kubenswrapper[33013]: I0313 11:08:03.727114 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 13 11:08:03.727489 master-0 kubenswrapper[33013]: I0313 11:08:03.727299 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 13 11:08:03.767219 master-0 kubenswrapper[33013]: I0313 11:08:03.767046 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fcf6e5ad-527f-422b-a88f-c91c66625546-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-mmj7d\" (UID: \"fcf6e5ad-527f-422b-a88f-c91c66625546\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:03.767219 master-0 kubenswrapper[33013]: I0313 11:08:03.767131 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-frr-conf\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.767219 master-0 kubenswrapper[33013]: I0313 11:08:03.767198 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9dcc\" (UniqueName: \"kubernetes.io/projected/fcf6e5ad-527f-422b-a88f-c91c66625546-kube-api-access-p9dcc\") pod \"frr-k8s-webhook-server-bcc4b6f68-mmj7d\" (UID: \"fcf6e5ad-527f-422b-a88f-c91c66625546\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:03.767219 master-0 kubenswrapper[33013]: I0313 11:08:03.767227 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/60249e0c-b4a7-4f2a-8271-f96ad477f42e-frr-startup\") pod \"frr-k8s-z9dtp\" (UID: 
\"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.767723 master-0 kubenswrapper[33013]: I0313 11:08:03.767262 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-reloader\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.767723 master-0 kubenswrapper[33013]: I0313 11:08:03.767303 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60249e0c-b4a7-4f2a-8271-f96ad477f42e-metrics-certs\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.767723 master-0 kubenswrapper[33013]: I0313 11:08:03.767368 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jp4n\" (UniqueName: \"kubernetes.io/projected/60249e0c-b4a7-4f2a-8271-f96ad477f42e-kube-api-access-6jp4n\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.767723 master-0 kubenswrapper[33013]: I0313 11:08:03.767402 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-frr-sockets\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.767723 master-0 kubenswrapper[33013]: I0313 11:08:03.767452 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-metrics\") pod \"frr-k8s-z9dtp\" (UID: 
\"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.767723 master-0 kubenswrapper[33013]: E0313 11:08:03.767635 33013 secret.go:189] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 13 11:08:03.767723 master-0 kubenswrapper[33013]: E0313 11:08:03.767689 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcf6e5ad-527f-422b-a88f-c91c66625546-cert podName:fcf6e5ad-527f-422b-a88f-c91c66625546 nodeName:}" failed. No retries permitted until 2026-03-13 11:08:04.267669461 +0000 UTC m=+667.743622810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fcf6e5ad-527f-422b-a88f-c91c66625546-cert") pod "frr-k8s-webhook-server-bcc4b6f68-mmj7d" (UID: "fcf6e5ad-527f-422b-a88f-c91c66625546") : secret "frr-k8s-webhook-server-cert" not found Mar 13 11:08:03.794256 master-0 kubenswrapper[33013]: I0313 11:08:03.794195 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tkcjf"] Mar 13 11:08:03.795748 master-0 kubenswrapper[33013]: I0313 11:08:03.795718 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tkcjf" Mar 13 11:08:03.797142 master-0 kubenswrapper[33013]: I0313 11:08:03.797093 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9dcc\" (UniqueName: \"kubernetes.io/projected/fcf6e5ad-527f-422b-a88f-c91c66625546-kube-api-access-p9dcc\") pod \"frr-k8s-webhook-server-bcc4b6f68-mmj7d\" (UID: \"fcf6e5ad-527f-422b-a88f-c91c66625546\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:03.798223 master-0 kubenswrapper[33013]: I0313 11:08:03.797793 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 13 11:08:03.798223 master-0 kubenswrapper[33013]: I0313 11:08:03.798036 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 13 11:08:03.803147 master-0 kubenswrapper[33013]: I0313 11:08:03.802896 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 13 11:08:03.824721 master-0 kubenswrapper[33013]: I0313 11:08:03.819359 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-7t8nh"] Mar 13 11:08:03.824721 master-0 kubenswrapper[33013]: I0313 11:08:03.821162 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:03.824721 master-0 kubenswrapper[33013]: I0313 11:08:03.822976 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 13 11:08:03.830730 master-0 kubenswrapper[33013]: I0313 11:08:03.830667 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-7t8nh"] Mar 13 11:08:03.869026 master-0 kubenswrapper[33013]: I0313 11:08:03.868951 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfxpt\" (UniqueName: \"kubernetes.io/projected/41cb7b03-d30d-489c-93d3-93ba92abd188-kube-api-access-zfxpt\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:03.869026 master-0 kubenswrapper[33013]: I0313 11:08:03.869006 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-memberlist\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869062 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk648\" (UniqueName: \"kubernetes.io/projected/65e88938-e6c6-4e21-8088-6eddb31f58fc-kube-api-access-bk648\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869121 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-frr-conf\") pod \"frr-k8s-z9dtp\" (UID: 
\"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869155 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41cb7b03-d30d-489c-93d3-93ba92abd188-cert\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869208 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/60249e0c-b4a7-4f2a-8271-f96ad477f42e-frr-startup\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869228 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-reloader\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869262 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60249e0c-b4a7-4f2a-8271-f96ad477f42e-metrics-certs\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869287 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-metrics-certs\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 
11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869315 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65e88938-e6c6-4e21-8088-6eddb31f58fc-metallb-excludel2\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869339 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jp4n\" (UniqueName: \"kubernetes.io/projected/60249e0c-b4a7-4f2a-8271-f96ad477f42e-kube-api-access-6jp4n\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.869373 master-0 kubenswrapper[33013]: I0313 11:08:03.869361 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41cb7b03-d30d-489c-93d3-93ba92abd188-metrics-certs\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:03.869741 master-0 kubenswrapper[33013]: I0313 11:08:03.869389 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-frr-sockets\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.869741 master-0 kubenswrapper[33013]: I0313 11:08:03.869417 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-metrics\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.870104 master-0 kubenswrapper[33013]: 
I0313 11:08:03.870076 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-metrics\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.870164 master-0 kubenswrapper[33013]: I0313 11:08:03.870133 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-frr-sockets\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.870469 master-0 kubenswrapper[33013]: I0313 11:08:03.870427 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-reloader\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.870537 master-0 kubenswrapper[33013]: I0313 11:08:03.870440 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/60249e0c-b4a7-4f2a-8271-f96ad477f42e-frr-conf\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.871327 master-0 kubenswrapper[33013]: I0313 11:08:03.871245 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/60249e0c-b4a7-4f2a-8271-f96ad477f42e-frr-startup\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.874444 master-0 kubenswrapper[33013]: I0313 11:08:03.874399 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/60249e0c-b4a7-4f2a-8271-f96ad477f42e-metrics-certs\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.897573 master-0 kubenswrapper[33013]: I0313 11:08:03.897527 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jp4n\" (UniqueName: \"kubernetes.io/projected/60249e0c-b4a7-4f2a-8271-f96ad477f42e-kube-api-access-6jp4n\") pod \"frr-k8s-z9dtp\" (UID: \"60249e0c-b4a7-4f2a-8271-f96ad477f42e\") " pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:03.971258 master-0 kubenswrapper[33013]: I0313 11:08:03.971179 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41cb7b03-d30d-489c-93d3-93ba92abd188-cert\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:03.971646 master-0 kubenswrapper[33013]: I0313 11:08:03.971632 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-metrics-certs\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:03.971765 master-0 kubenswrapper[33013]: E0313 11:08:03.971725 33013 secret.go:189] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 13 11:08:03.971826 master-0 kubenswrapper[33013]: E0313 11:08:03.971798 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-metrics-certs podName:65e88938-e6c6-4e21-8088-6eddb31f58fc nodeName:}" failed. No retries permitted until 2026-03-13 11:08:04.471783113 +0000 UTC m=+667.947736452 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-metrics-certs") pod "speaker-tkcjf" (UID: "65e88938-e6c6-4e21-8088-6eddb31f58fc") : secret "speaker-certs-secret" not found Mar 13 11:08:03.971907 master-0 kubenswrapper[33013]: I0313 11:08:03.971888 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65e88938-e6c6-4e21-8088-6eddb31f58fc-metallb-excludel2\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:03.972031 master-0 kubenswrapper[33013]: I0313 11:08:03.972016 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41cb7b03-d30d-489c-93d3-93ba92abd188-metrics-certs\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:03.972183 master-0 kubenswrapper[33013]: I0313 11:08:03.972153 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfxpt\" (UniqueName: \"kubernetes.io/projected/41cb7b03-d30d-489c-93d3-93ba92abd188-kube-api-access-zfxpt\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:03.972277 master-0 kubenswrapper[33013]: I0313 11:08:03.972264 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-memberlist\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:03.972392 master-0 kubenswrapper[33013]: I0313 11:08:03.972379 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bk648\" (UniqueName: \"kubernetes.io/projected/65e88938-e6c6-4e21-8088-6eddb31f58fc-kube-api-access-bk648\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:03.972627 master-0 kubenswrapper[33013]: I0313 11:08:03.972600 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65e88938-e6c6-4e21-8088-6eddb31f58fc-metallb-excludel2\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:03.972770 master-0 kubenswrapper[33013]: I0313 11:08:03.972725 33013 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 11:08:03.972848 master-0 kubenswrapper[33013]: E0313 11:08:03.972825 33013 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 13 11:08:03.972888 master-0 kubenswrapper[33013]: E0313 11:08:03.972860 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41cb7b03-d30d-489c-93d3-93ba92abd188-metrics-certs podName:41cb7b03-d30d-489c-93d3-93ba92abd188 nodeName:}" failed. No retries permitted until 2026-03-13 11:08:04.472850023 +0000 UTC m=+667.948803372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41cb7b03-d30d-489c-93d3-93ba92abd188-metrics-certs") pod "controller-7bb4cc7c98-7t8nh" (UID: "41cb7b03-d30d-489c-93d3-93ba92abd188") : secret "controller-certs-secret" not found Mar 13 11:08:03.972929 master-0 kubenswrapper[33013]: E0313 11:08:03.972902 33013 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 11:08:03.972929 master-0 kubenswrapper[33013]: E0313 11:08:03.972924 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-memberlist podName:65e88938-e6c6-4e21-8088-6eddb31f58fc nodeName:}" failed. No retries permitted until 2026-03-13 11:08:04.472917385 +0000 UTC m=+667.948870734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-memberlist") pod "speaker-tkcjf" (UID: "65e88938-e6c6-4e21-8088-6eddb31f58fc") : secret "metallb-memberlist" not found Mar 13 11:08:03.987576 master-0 kubenswrapper[33013]: I0313 11:08:03.987530 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41cb7b03-d30d-489c-93d3-93ba92abd188-cert\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:03.993978 master-0 kubenswrapper[33013]: I0313 11:08:03.993948 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk648\" (UniqueName: \"kubernetes.io/projected/65e88938-e6c6-4e21-8088-6eddb31f58fc-kube-api-access-bk648\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:04.008245 master-0 kubenswrapper[33013]: I0313 11:08:03.997683 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-zfxpt\" (UniqueName: \"kubernetes.io/projected/41cb7b03-d30d-489c-93d3-93ba92abd188-kube-api-access-zfxpt\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:04.041796 master-0 kubenswrapper[33013]: I0313 11:08:04.041172 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:04.279802 master-0 kubenswrapper[33013]: I0313 11:08:04.278301 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fcf6e5ad-527f-422b-a88f-c91c66625546-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-mmj7d\" (UID: \"fcf6e5ad-527f-422b-a88f-c91c66625546\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:04.283235 master-0 kubenswrapper[33013]: I0313 11:08:04.283195 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fcf6e5ad-527f-422b-a88f-c91c66625546-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-mmj7d\" (UID: \"fcf6e5ad-527f-422b-a88f-c91c66625546\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:04.318366 master-0 kubenswrapper[33013]: I0313 11:08:04.318279 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:04.485364 master-0 kubenswrapper[33013]: I0313 11:08:04.485272 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-metrics-certs\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:04.485621 master-0 kubenswrapper[33013]: I0313 11:08:04.485401 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41cb7b03-d30d-489c-93d3-93ba92abd188-metrics-certs\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:04.485672 master-0 kubenswrapper[33013]: I0313 11:08:04.485631 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-memberlist\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:04.486084 master-0 kubenswrapper[33013]: E0313 11:08:04.486004 33013 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 11:08:04.486396 master-0 kubenswrapper[33013]: E0313 11:08:04.486118 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-memberlist podName:65e88938-e6c6-4e21-8088-6eddb31f58fc nodeName:}" failed. No retries permitted until 2026-03-13 11:08:05.486087686 +0000 UTC m=+668.962041055 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-memberlist") pod "speaker-tkcjf" (UID: "65e88938-e6c6-4e21-8088-6eddb31f58fc") : secret "metallb-memberlist" not found Mar 13 11:08:04.490971 master-0 kubenswrapper[33013]: I0313 11:08:04.490890 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-metrics-certs\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:04.491443 master-0 kubenswrapper[33013]: I0313 11:08:04.491397 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41cb7b03-d30d-489c-93d3-93ba92abd188-metrics-certs\") pod \"controller-7bb4cc7c98-7t8nh\" (UID: \"41cb7b03-d30d-489c-93d3-93ba92abd188\") " pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:04.560330 master-0 kubenswrapper[33013]: I0313 11:08:04.560248 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:04.597055 master-0 kubenswrapper[33013]: I0313 11:08:04.596419 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerStarted","Data":"d02a7686851b30c7c9df15d29d2038a4b781e0b086eff45ec820df6d013dfd3c"} Mar 13 11:08:04.764162 master-0 kubenswrapper[33013]: W0313 11:08:04.764083 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcf6e5ad_527f_422b_a88f_c91c66625546.slice/crio-2ee2689e0cfc217b0c28ad6370c7d1e705e10e27562fe222aa3309852df7b7f5 WatchSource:0}: Error finding container 2ee2689e0cfc217b0c28ad6370c7d1e705e10e27562fe222aa3309852df7b7f5: Status 404 returned error can't find the container with id 2ee2689e0cfc217b0c28ad6370c7d1e705e10e27562fe222aa3309852df7b7f5 Mar 13 11:08:04.766644 master-0 kubenswrapper[33013]: I0313 11:08:04.765934 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d"] Mar 13 11:08:05.008682 master-0 kubenswrapper[33013]: I0313 11:08:05.002747 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-7t8nh"] Mar 13 11:08:05.008682 master-0 kubenswrapper[33013]: W0313 11:08:05.006374 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41cb7b03_d30d_489c_93d3_93ba92abd188.slice/crio-2d38283af2a54fe742baf508f518a884ed70152af94318456bef5c898a70baa9 WatchSource:0}: Error finding container 2d38283af2a54fe742baf508f518a884ed70152af94318456bef5c898a70baa9: Status 404 returned error can't find the container with id 2d38283af2a54fe742baf508f518a884ed70152af94318456bef5c898a70baa9 Mar 13 11:08:05.504009 master-0 kubenswrapper[33013]: I0313 11:08:05.503927 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-memberlist\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:05.507779 master-0 kubenswrapper[33013]: I0313 11:08:05.507664 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65e88938-e6c6-4e21-8088-6eddb31f58fc-memberlist\") pod \"speaker-tkcjf\" (UID: \"65e88938-e6c6-4e21-8088-6eddb31f58fc\") " pod="metallb-system/speaker-tkcjf" Mar 13 11:08:05.615510 master-0 kubenswrapper[33013]: I0313 11:08:05.615445 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-7t8nh" event={"ID":"41cb7b03-d30d-489c-93d3-93ba92abd188","Type":"ContainerStarted","Data":"063132279b8e66573b990455ae494491ecd9832698aae546c6ff5649f27e92ce"} Mar 13 11:08:05.615510 master-0 kubenswrapper[33013]: I0313 11:08:05.615513 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-7t8nh" event={"ID":"41cb7b03-d30d-489c-93d3-93ba92abd188","Type":"ContainerStarted","Data":"2d38283af2a54fe742baf508f518a884ed70152af94318456bef5c898a70baa9"} Mar 13 11:08:05.617580 master-0 kubenswrapper[33013]: I0313 11:08:05.617496 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" event={"ID":"fcf6e5ad-527f-422b-a88f-c91c66625546","Type":"ContainerStarted","Data":"2ee2689e0cfc217b0c28ad6370c7d1e705e10e27562fe222aa3309852df7b7f5"} Mar 13 11:08:05.742417 master-0 kubenswrapper[33013]: I0313 11:08:05.742331 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tkcjf" Mar 13 11:08:06.089845 master-0 kubenswrapper[33013]: I0313 11:08:06.087313 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh"] Mar 13 11:08:06.091848 master-0 kubenswrapper[33013]: I0313 11:08:06.091808 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:06.097660 master-0 kubenswrapper[33013]: I0313 11:08:06.096932 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 13 11:08:06.111710 master-0 kubenswrapper[33013]: I0313 11:08:06.109440 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d"] Mar 13 11:08:06.111710 master-0 kubenswrapper[33013]: I0313 11:08:06.111164 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d" Mar 13 11:08:06.143925 master-0 kubenswrapper[33013]: I0313 11:08:06.143789 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59f7q\" (UniqueName: \"kubernetes.io/projected/c28ebe77-62c1-4a7c-af37-28b087b86bf5-kube-api-access-59f7q\") pod \"nmstate-metrics-9b8c8685d-gnw4d\" (UID: \"c28ebe77-62c1-4a7c-af37-28b087b86bf5\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d" Mar 13 11:08:06.152378 master-0 kubenswrapper[33013]: I0313 11:08:06.148164 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-44prf"] Mar 13 11:08:06.152378 master-0 kubenswrapper[33013]: I0313 11:08:06.149441 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.157192 master-0 kubenswrapper[33013]: I0313 11:08:06.157090 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d"] Mar 13 11:08:06.164226 master-0 kubenswrapper[33013]: I0313 11:08:06.163374 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh"] Mar 13 11:08:06.258151 master-0 kubenswrapper[33013]: I0313 11:08:06.258088 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79g4l\" (UniqueName: \"kubernetes.io/projected/e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b-kube-api-access-79g4l\") pod \"nmstate-webhook-5f558f5558-dz5sh\" (UID: \"e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:06.258151 master-0 kubenswrapper[33013]: I0313 11:08:06.258140 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cb8fa512-1ea8-41fe-b694-913f6e19c45b-ovs-socket\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.258466 master-0 kubenswrapper[33013]: I0313 11:08:06.258184 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-dz5sh\" (UID: \"e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:06.258466 master-0 kubenswrapper[33013]: I0313 11:08:06.258220 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/cb8fa512-1ea8-41fe-b694-913f6e19c45b-dbus-socket\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.258466 master-0 kubenswrapper[33013]: I0313 11:08:06.258252 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59f7q\" (UniqueName: \"kubernetes.io/projected/c28ebe77-62c1-4a7c-af37-28b087b86bf5-kube-api-access-59f7q\") pod \"nmstate-metrics-9b8c8685d-gnw4d\" (UID: \"c28ebe77-62c1-4a7c-af37-28b087b86bf5\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d" Mar 13 11:08:06.258466 master-0 kubenswrapper[33013]: I0313 11:08:06.258272 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsbnc\" (UniqueName: \"kubernetes.io/projected/cb8fa512-1ea8-41fe-b694-913f6e19c45b-kube-api-access-xsbnc\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.258466 master-0 kubenswrapper[33013]: I0313 11:08:06.258294 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cb8fa512-1ea8-41fe-b694-913f6e19c45b-nmstate-lock\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.259733 master-0 kubenswrapper[33013]: I0313 11:08:06.259710 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w"] Mar 13 11:08:06.261304 master-0 kubenswrapper[33013]: I0313 11:08:06.260952 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:06.263079 master-0 kubenswrapper[33013]: I0313 11:08:06.263056 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 13 11:08:06.265556 master-0 kubenswrapper[33013]: I0313 11:08:06.265512 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 13 11:08:06.281921 master-0 kubenswrapper[33013]: I0313 11:08:06.281870 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59f7q\" (UniqueName: \"kubernetes.io/projected/c28ebe77-62c1-4a7c-af37-28b087b86bf5-kube-api-access-59f7q\") pod \"nmstate-metrics-9b8c8685d-gnw4d\" (UID: \"c28ebe77-62c1-4a7c-af37-28b087b86bf5\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d" Mar 13 11:08:06.285364 master-0 kubenswrapper[33013]: I0313 11:08:06.285321 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w"] Mar 13 11:08:06.361546 master-0 kubenswrapper[33013]: I0313 11:08:06.361488 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79g4l\" (UniqueName: \"kubernetes.io/projected/e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b-kube-api-access-79g4l\") pod \"nmstate-webhook-5f558f5558-dz5sh\" (UID: \"e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:06.361546 master-0 kubenswrapper[33013]: I0313 11:08:06.361545 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cb8fa512-1ea8-41fe-b694-913f6e19c45b-ovs-socket\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.361848 master-0 kubenswrapper[33013]: I0313 11:08:06.361604 33013 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:06.361848 master-0 kubenswrapper[33013]: I0313 11:08:06.361636 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nkf6\" (UniqueName: \"kubernetes.io/projected/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-kube-api-access-2nkf6\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:06.361913 master-0 kubenswrapper[33013]: I0313 11:08:06.361772 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cb8fa512-1ea8-41fe-b694-913f6e19c45b-ovs-socket\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.362002 master-0 kubenswrapper[33013]: I0313 11:08:06.361939 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-dz5sh\" (UID: \"e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:06.362136 master-0 kubenswrapper[33013]: I0313 11:08:06.362084 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cb8fa512-1ea8-41fe-b694-913f6e19c45b-dbus-socket\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " 
pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.362191 master-0 kubenswrapper[33013]: I0313 11:08:06.362141 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsbnc\" (UniqueName: \"kubernetes.io/projected/cb8fa512-1ea8-41fe-b694-913f6e19c45b-kube-api-access-xsbnc\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.362340 master-0 kubenswrapper[33013]: I0313 11:08:06.362308 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cb8fa512-1ea8-41fe-b694-913f6e19c45b-nmstate-lock\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.362526 master-0 kubenswrapper[33013]: I0313 11:08:06.362494 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:06.366611 master-0 kubenswrapper[33013]: I0313 11:08:06.363521 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cb8fa512-1ea8-41fe-b694-913f6e19c45b-dbus-socket\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.366611 master-0 kubenswrapper[33013]: I0313 11:08:06.363603 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cb8fa512-1ea8-41fe-b694-913f6e19c45b-nmstate-lock\") pod \"nmstate-handler-44prf\" (UID: 
\"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.371004 master-0 kubenswrapper[33013]: I0313 11:08:06.370928 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-dz5sh\" (UID: \"e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:06.388307 master-0 kubenswrapper[33013]: I0313 11:08:06.386173 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79g4l\" (UniqueName: \"kubernetes.io/projected/e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b-kube-api-access-79g4l\") pod \"nmstate-webhook-5f558f5558-dz5sh\" (UID: \"e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:06.399415 master-0 kubenswrapper[33013]: I0313 11:08:06.399366 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsbnc\" (UniqueName: \"kubernetes.io/projected/cb8fa512-1ea8-41fe-b694-913f6e19c45b-kube-api-access-xsbnc\") pod \"nmstate-handler-44prf\" (UID: \"cb8fa512-1ea8-41fe-b694-913f6e19c45b\") " pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.466686 master-0 kubenswrapper[33013]: I0313 11:08:06.464909 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:06.466686 master-0 kubenswrapper[33013]: I0313 11:08:06.464972 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nkf6\" (UniqueName: 
\"kubernetes.io/projected/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-kube-api-access-2nkf6\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:06.466686 master-0 kubenswrapper[33013]: I0313 11:08:06.465051 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:06.466686 master-0 kubenswrapper[33013]: I0313 11:08:06.465206 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:06.466686 master-0 kubenswrapper[33013]: E0313 11:08:06.465764 33013 secret.go:189] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 13 11:08:06.466686 master-0 kubenswrapper[33013]: E0313 11:08:06.465805 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-plugin-serving-cert podName:a4efdfa7-d7da-4e65-a31b-305a653a5a0c nodeName:}" failed. No retries permitted until 2026-03-13 11:08:06.965789459 +0000 UTC m=+670.441742808 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-qtp7w" (UID: "a4efdfa7-d7da-4e65-a31b-305a653a5a0c") : secret "plugin-serving-cert" not found Mar 13 11:08:06.467734 master-0 kubenswrapper[33013]: I0313 11:08:06.467576 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f95cbfbff-nnc22"] Mar 13 11:08:06.473271 master-0 kubenswrapper[33013]: I0313 11:08:06.468255 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:06.473271 master-0 kubenswrapper[33013]: I0313 11:08:06.471006 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.483163 master-0 kubenswrapper[33013]: I0313 11:08:06.481737 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d" Mar 13 11:08:06.503980 master-0 kubenswrapper[33013]: I0313 11:08:06.502215 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f95cbfbff-nnc22"] Mar 13 11:08:06.503980 master-0 kubenswrapper[33013]: I0313 11:08:06.502782 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:06.509731 master-0 kubenswrapper[33013]: I0313 11:08:06.509683 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nkf6\" (UniqueName: \"kubernetes.io/projected/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-kube-api-access-2nkf6\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:06.569782 master-0 kubenswrapper[33013]: I0313 11:08:06.566875 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-trusted-ca-bundle\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.569782 master-0 kubenswrapper[33013]: I0313 11:08:06.566931 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50258969-f5f1-4d76-987a-efeee2c541a9-console-serving-cert\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.569782 master-0 kubenswrapper[33013]: I0313 11:08:06.566971 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50258969-f5f1-4d76-987a-efeee2c541a9-console-oauth-config\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.569782 master-0 kubenswrapper[33013]: I0313 11:08:06.567019 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-oauth-serving-cert\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.569782 master-0 kubenswrapper[33013]: I0313 11:08:06.567073 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-console-config\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.569782 master-0 kubenswrapper[33013]: I0313 11:08:06.567154 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-service-ca\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.569782 master-0 kubenswrapper[33013]: I0313 11:08:06.567192 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74xd\" (UniqueName: \"kubernetes.io/projected/50258969-f5f1-4d76-987a-efeee2c541a9-kube-api-access-d74xd\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.656315 master-0 kubenswrapper[33013]: I0313 11:08:06.655935 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-44prf" event={"ID":"cb8fa512-1ea8-41fe-b694-913f6e19c45b","Type":"ContainerStarted","Data":"9842ab738cb9701dbc4d88066ed2dc91a6a37c6cb9fc935b44a708063d0cf9dc"} Mar 13 11:08:06.665162 master-0 kubenswrapper[33013]: I0313 11:08:06.665113 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tkcjf" 
event={"ID":"65e88938-e6c6-4e21-8088-6eddb31f58fc","Type":"ContainerStarted","Data":"2e4e5758b5cc81467f0e1d47a18e1c3c650216e7df5f77b2c473a78673005be7"} Mar 13 11:08:06.665390 master-0 kubenswrapper[33013]: I0313 11:08:06.665367 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tkcjf" event={"ID":"65e88938-e6c6-4e21-8088-6eddb31f58fc","Type":"ContainerStarted","Data":"020c99caf745a35a52f8bd43c5d2d840ed233c3335dc1f6d5a86fa64152d0520"} Mar 13 11:08:06.671730 master-0 kubenswrapper[33013]: I0313 11:08:06.671638 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-service-ca\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.671876 master-0 kubenswrapper[33013]: I0313 11:08:06.671737 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74xd\" (UniqueName: \"kubernetes.io/projected/50258969-f5f1-4d76-987a-efeee2c541a9-kube-api-access-d74xd\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.671876 master-0 kubenswrapper[33013]: I0313 11:08:06.671808 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-trusted-ca-bundle\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.671975 master-0 kubenswrapper[33013]: I0313 11:08:06.671840 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50258969-f5f1-4d76-987a-efeee2c541a9-console-serving-cert\") pod \"console-f95cbfbff-nnc22\" 
(UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.671975 master-0 kubenswrapper[33013]: I0313 11:08:06.671959 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50258969-f5f1-4d76-987a-efeee2c541a9-console-oauth-config\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.672175 master-0 kubenswrapper[33013]: I0313 11:08:06.672027 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-oauth-serving-cert\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.685428 master-0 kubenswrapper[33013]: I0313 11:08:06.672145 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-console-config\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.687142 master-0 kubenswrapper[33013]: I0313 11:08:06.687106 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50258969-f5f1-4d76-987a-efeee2c541a9-console-oauth-config\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.687628 master-0 kubenswrapper[33013]: I0313 11:08:06.687543 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-service-ca\") pod 
\"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.689013 master-0 kubenswrapper[33013]: I0313 11:08:06.688976 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-oauth-serving-cert\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.689445 master-0 kubenswrapper[33013]: I0313 11:08:06.689377 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-console-config\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.694605 master-0 kubenswrapper[33013]: I0313 11:08:06.694543 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50258969-f5f1-4d76-987a-efeee2c541a9-console-serving-cert\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.718812 master-0 kubenswrapper[33013]: I0313 11:08:06.714157 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50258969-f5f1-4d76-987a-efeee2c541a9-trusted-ca-bundle\") pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.756444 master-0 kubenswrapper[33013]: I0313 11:08:06.746050 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74xd\" (UniqueName: \"kubernetes.io/projected/50258969-f5f1-4d76-987a-efeee2c541a9-kube-api-access-d74xd\") 
pod \"console-f95cbfbff-nnc22\" (UID: \"50258969-f5f1-4d76-987a-efeee2c541a9\") " pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:06.816818 master-0 kubenswrapper[33013]: I0313 11:08:06.816409 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:07.037333 master-0 kubenswrapper[33013]: I0313 11:08:07.036270 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:07.043604 master-0 kubenswrapper[33013]: I0313 11:08:07.043344 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4efdfa7-d7da-4e65-a31b-305a653a5a0c-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-qtp7w\" (UID: \"a4efdfa7-d7da-4e65-a31b-305a653a5a0c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:07.094001 master-0 kubenswrapper[33013]: I0313 11:08:07.093743 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d"] Mar 13 11:08:07.094001 master-0 kubenswrapper[33013]: W0313 11:08:07.093821 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28ebe77_62c1_4a7c_af37_28b087b86bf5.slice/crio-668dba094bdfd699778eadf317fbe3100eab7f4fd7833bc2d39cf30b2249d5f4 WatchSource:0}: Error finding container 668dba094bdfd699778eadf317fbe3100eab7f4fd7833bc2d39cf30b2249d5f4: Status 404 returned error can't find the container with id 668dba094bdfd699778eadf317fbe3100eab7f4fd7833bc2d39cf30b2249d5f4 Mar 13 11:08:07.158506 master-0 kubenswrapper[33013]: 
I0313 11:08:07.158450 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh"] Mar 13 11:08:07.210243 master-0 kubenswrapper[33013]: I0313 11:08:07.210025 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" Mar 13 11:08:07.404747 master-0 kubenswrapper[33013]: I0313 11:08:07.404686 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f95cbfbff-nnc22"] Mar 13 11:08:07.423677 master-0 kubenswrapper[33013]: W0313 11:08:07.423609 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50258969_f5f1_4d76_987a_efeee2c541a9.slice/crio-5c1def5fe2e826353fa9e0ca9ea1513bde88bd97239ed4c3d07ff3aad59ea068 WatchSource:0}: Error finding container 5c1def5fe2e826353fa9e0ca9ea1513bde88bd97239ed4c3d07ff3aad59ea068: Status 404 returned error can't find the container with id 5c1def5fe2e826353fa9e0ca9ea1513bde88bd97239ed4c3d07ff3aad59ea068 Mar 13 11:08:07.678848 master-0 kubenswrapper[33013]: I0313 11:08:07.678309 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d" event={"ID":"c28ebe77-62c1-4a7c-af37-28b087b86bf5","Type":"ContainerStarted","Data":"668dba094bdfd699778eadf317fbe3100eab7f4fd7833bc2d39cf30b2249d5f4"} Mar 13 11:08:07.683789 master-0 kubenswrapper[33013]: I0313 11:08:07.683652 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f95cbfbff-nnc22" event={"ID":"50258969-f5f1-4d76-987a-efeee2c541a9","Type":"ContainerStarted","Data":"7775536a1a9d4acf0f19741aec76e9cfbb731e711e35eb355c5687df5a8822ce"} Mar 13 11:08:07.683789 master-0 kubenswrapper[33013]: I0313 11:08:07.683778 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f95cbfbff-nnc22" 
event={"ID":"50258969-f5f1-4d76-987a-efeee2c541a9","Type":"ContainerStarted","Data":"5c1def5fe2e826353fa9e0ca9ea1513bde88bd97239ed4c3d07ff3aad59ea068"} Mar 13 11:08:07.686073 master-0 kubenswrapper[33013]: I0313 11:08:07.686032 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" event={"ID":"e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b","Type":"ContainerStarted","Data":"012450d30070c04afc7ec5bec7cfb3f7c4e76e6e7ff85b50fa2b0df3900509b1"} Mar 13 11:08:07.690475 master-0 kubenswrapper[33013]: I0313 11:08:07.689515 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w"] Mar 13 11:08:07.698026 master-0 kubenswrapper[33013]: W0313 11:08:07.697971 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4efdfa7_d7da_4e65_a31b_305a653a5a0c.slice/crio-5a0e3ddb9c5c0f98a07faa476cd225738050d579e3f746c65d9ba92cfdec7e97 WatchSource:0}: Error finding container 5a0e3ddb9c5c0f98a07faa476cd225738050d579e3f746c65d9ba92cfdec7e97: Status 404 returned error can't find the container with id 5a0e3ddb9c5c0f98a07faa476cd225738050d579e3f746c65d9ba92cfdec7e97 Mar 13 11:08:07.713498 master-0 kubenswrapper[33013]: I0313 11:08:07.713374 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f95cbfbff-nnc22" podStartSLOduration=1.7133572830000001 podStartE2EDuration="1.713357283s" podCreationTimestamp="2026-03-13 11:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:08:07.712670084 +0000 UTC m=+671.188623433" watchObservedRunningTime="2026-03-13 11:08:07.713357283 +0000 UTC m=+671.189310632" Mar 13 11:08:08.696841 master-0 kubenswrapper[33013]: I0313 11:08:08.696745 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" event={"ID":"a4efdfa7-d7da-4e65-a31b-305a653a5a0c","Type":"ContainerStarted","Data":"5a0e3ddb9c5c0f98a07faa476cd225738050d579e3f746c65d9ba92cfdec7e97"} Mar 13 11:08:13.767134 master-0 kubenswrapper[33013]: I0313 11:08:13.767063 33013 generic.go:334] "Generic (PLEG): container finished" podID="60249e0c-b4a7-4f2a-8271-f96ad477f42e" containerID="126ac8e4140aa3d449365bcfcee4a30c1319cd0f6bb9c2dcf5f9e18d1548d5b4" exitCode=0 Mar 13 11:08:13.767896 master-0 kubenswrapper[33013]: I0313 11:08:13.767166 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerDied","Data":"126ac8e4140aa3d449365bcfcee4a30c1319cd0f6bb9c2dcf5f9e18d1548d5b4"} Mar 13 11:08:13.771706 master-0 kubenswrapper[33013]: I0313 11:08:13.771653 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d" event={"ID":"c28ebe77-62c1-4a7c-af37-28b087b86bf5","Type":"ContainerStarted","Data":"56694bccaf97e8f2a2e27f04b396bab66a0e8494974ce3fc87999b11b02eb381"} Mar 13 11:08:13.771794 master-0 kubenswrapper[33013]: I0313 11:08:13.771715 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d" event={"ID":"c28ebe77-62c1-4a7c-af37-28b087b86bf5","Type":"ContainerStarted","Data":"7f29f48726bd52919a7638fc72a046e014e91082b1c08380d2186bfaa1a71c23"} Mar 13 11:08:13.778882 master-0 kubenswrapper[33013]: I0313 11:08:13.778827 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" event={"ID":"e8ac3bde-adc3-41f6-abaa-8bdb45a6e85b","Type":"ContainerStarted","Data":"4ab9b8d4e034c56d184f6f25c9af05891d1a20bfb4e0c0ff2e7849857a7af290"} Mar 13 11:08:13.779750 master-0 kubenswrapper[33013]: I0313 11:08:13.779528 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:13.817046 master-0 kubenswrapper[33013]: I0313 11:08:13.814168 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-7t8nh" event={"ID":"41cb7b03-d30d-489c-93d3-93ba92abd188","Type":"ContainerStarted","Data":"b7f565d4e5e9a269805394c052f9c59e9372ab85d6bc2cfb5d41e3fc2c442dd9"} Mar 13 11:08:13.817046 master-0 kubenswrapper[33013]: I0313 11:08:13.814477 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:13.831710 master-0 kubenswrapper[33013]: I0313 11:08:13.830092 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" event={"ID":"a4efdfa7-d7da-4e65-a31b-305a653a5a0c","Type":"ContainerStarted","Data":"662abe3b27bbe9a7ebb738a9e07871837e101b0e859ef64f5478c5875d751535"} Mar 13 11:08:13.884188 master-0 kubenswrapper[33013]: I0313 11:08:13.875661 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gnw4d" podStartSLOduration=1.957860255 podStartE2EDuration="7.875630478s" podCreationTimestamp="2026-03-13 11:08:06 +0000 UTC" firstStartedPulling="2026-03-13 11:08:07.130773169 +0000 UTC m=+670.606726518" lastFinishedPulling="2026-03-13 11:08:13.048543392 +0000 UTC m=+676.524496741" observedRunningTime="2026-03-13 11:08:13.832910042 +0000 UTC m=+677.308863421" watchObservedRunningTime="2026-03-13 11:08:13.875630478 +0000 UTC m=+677.351583837" Mar 13 11:08:13.886094 master-0 kubenswrapper[33013]: I0313 11:08:13.886047 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-44prf" event={"ID":"cb8fa512-1ea8-41fe-b694-913f6e19c45b","Type":"ContainerStarted","Data":"0a61bb985e9b7fd232dfeca5f695dae851f27b11e5d09ce23406b9f79172f6eb"} Mar 13 11:08:13.886539 master-0 kubenswrapper[33013]: I0313 11:08:13.886508 33013 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:13.890451 master-0 kubenswrapper[33013]: I0313 11:08:13.890378 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" podStartSLOduration=1.988844402 podStartE2EDuration="7.89035387s" podCreationTimestamp="2026-03-13 11:08:06 +0000 UTC" firstStartedPulling="2026-03-13 11:08:07.144933445 +0000 UTC m=+670.620886794" lastFinishedPulling="2026-03-13 11:08:13.046442923 +0000 UTC m=+676.522396262" observedRunningTime="2026-03-13 11:08:13.885926946 +0000 UTC m=+677.361880305" watchObservedRunningTime="2026-03-13 11:08:13.89035387 +0000 UTC m=+677.366307219" Mar 13 11:08:13.919125 master-0 kubenswrapper[33013]: I0313 11:08:13.918216 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tkcjf" event={"ID":"65e88938-e6c6-4e21-8088-6eddb31f58fc","Type":"ContainerStarted","Data":"94f53b774bc286bd223eb1cf30778601a25d87d2a364e0b054dc1e251b87817a"} Mar 13 11:08:13.919125 master-0 kubenswrapper[33013]: I0313 11:08:13.919053 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tkcjf" Mar 13 11:08:13.921342 master-0 kubenswrapper[33013]: I0313 11:08:13.921305 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-qtp7w" podStartSLOduration=2.5778603970000002 podStartE2EDuration="7.921291506s" podCreationTimestamp="2026-03-13 11:08:06 +0000 UTC" firstStartedPulling="2026-03-13 11:08:07.70250447 +0000 UTC m=+671.178457819" lastFinishedPulling="2026-03-13 11:08:13.045935579 +0000 UTC m=+676.521888928" observedRunningTime="2026-03-13 11:08:13.91749816 +0000 UTC m=+677.393451509" watchObservedRunningTime="2026-03-13 11:08:13.921291506 +0000 UTC m=+677.397244855" Mar 13 11:08:13.922910 master-0 kubenswrapper[33013]: I0313 11:08:13.922858 33013 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" event={"ID":"fcf6e5ad-527f-422b-a88f-c91c66625546","Type":"ContainerStarted","Data":"a8aa488590755793a42c7cfcc487532d09cb36c0a413106d520aec393975823e"} Mar 13 11:08:13.923991 master-0 kubenswrapper[33013]: I0313 11:08:13.923960 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:13.948492 master-0 kubenswrapper[33013]: I0313 11:08:13.948384 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-7t8nh" podStartSLOduration=3.222441099 podStartE2EDuration="10.948349153s" podCreationTimestamp="2026-03-13 11:08:03 +0000 UTC" firstStartedPulling="2026-03-13 11:08:05.160380547 +0000 UTC m=+668.636333896" lastFinishedPulling="2026-03-13 11:08:12.886288601 +0000 UTC m=+676.362241950" observedRunningTime="2026-03-13 11:08:13.93967859 +0000 UTC m=+677.415631959" watchObservedRunningTime="2026-03-13 11:08:13.948349153 +0000 UTC m=+677.424302502" Mar 13 11:08:13.990994 master-0 kubenswrapper[33013]: I0313 11:08:13.990919 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" podStartSLOduration=2.7112929340000003 podStartE2EDuration="10.990896844s" podCreationTimestamp="2026-03-13 11:08:03 +0000 UTC" firstStartedPulling="2026-03-13 11:08:04.767292466 +0000 UTC m=+668.243245815" lastFinishedPulling="2026-03-13 11:08:13.046896376 +0000 UTC m=+676.522849725" observedRunningTime="2026-03-13 11:08:13.977865029 +0000 UTC m=+677.453818398" watchObservedRunningTime="2026-03-13 11:08:13.990896844 +0000 UTC m=+677.466850193" Mar 13 11:08:14.021437 master-0 kubenswrapper[33013]: I0313 11:08:14.021286 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-44prf" podStartSLOduration=1.711432168 
podStartE2EDuration="8.021256363s" podCreationTimestamp="2026-03-13 11:08:06 +0000 UTC" firstStartedPulling="2026-03-13 11:08:06.604211493 +0000 UTC m=+670.080164842" lastFinishedPulling="2026-03-13 11:08:12.914035688 +0000 UTC m=+676.389989037" observedRunningTime="2026-03-13 11:08:14.009511445 +0000 UTC m=+677.485464794" watchObservedRunningTime="2026-03-13 11:08:14.021256363 +0000 UTC m=+677.497209742" Mar 13 11:08:14.040215 master-0 kubenswrapper[33013]: I0313 11:08:14.040091 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tkcjf" podStartSLOduration=4.375173729 podStartE2EDuration="11.04005575s" podCreationTimestamp="2026-03-13 11:08:03 +0000 UTC" firstStartedPulling="2026-03-13 11:08:06.22140504 +0000 UTC m=+669.697358389" lastFinishedPulling="2026-03-13 11:08:12.886287061 +0000 UTC m=+676.362240410" observedRunningTime="2026-03-13 11:08:14.029074772 +0000 UTC m=+677.505028131" watchObservedRunningTime="2026-03-13 11:08:14.04005575 +0000 UTC m=+677.516009099" Mar 13 11:08:14.935205 master-0 kubenswrapper[33013]: I0313 11:08:14.935125 33013 generic.go:334] "Generic (PLEG): container finished" podID="60249e0c-b4a7-4f2a-8271-f96ad477f42e" containerID="af8bee4859b11d3cbec10982b601001de41501d1d7101000a189eee4933916de" exitCode=0 Mar 13 11:08:14.935895 master-0 kubenswrapper[33013]: I0313 11:08:14.935211 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerDied","Data":"af8bee4859b11d3cbec10982b601001de41501d1d7101000a189eee4933916de"} Mar 13 11:08:15.949991 master-0 kubenswrapper[33013]: I0313 11:08:15.949932 33013 generic.go:334] "Generic (PLEG): container finished" podID="60249e0c-b4a7-4f2a-8271-f96ad477f42e" containerID="0b4b511b51cc8d5e3a618c3c86acbee7055a3436fac3f8e552f46e4f8554162e" exitCode=0 Mar 13 11:08:15.950791 master-0 kubenswrapper[33013]: I0313 11:08:15.950023 33013 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerDied","Data":"0b4b511b51cc8d5e3a618c3c86acbee7055a3436fac3f8e552f46e4f8554162e"} Mar 13 11:08:16.820727 master-0 kubenswrapper[33013]: I0313 11:08:16.820518 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:16.820727 master-0 kubenswrapper[33013]: I0313 11:08:16.820680 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:16.828738 master-0 kubenswrapper[33013]: I0313 11:08:16.826932 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:16.961854 master-0 kubenswrapper[33013]: I0313 11:08:16.961773 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerStarted","Data":"8813edb382b338f6aabd986847ed78468a54c30e67693f09621d04f8546d6a40"} Mar 13 11:08:16.961854 master-0 kubenswrapper[33013]: I0313 11:08:16.961842 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerStarted","Data":"fc73df7176058c9efc489176860bea84d9bc5caf04f4513eb1a431b4b709bb97"} Mar 13 11:08:16.961854 master-0 kubenswrapper[33013]: I0313 11:08:16.961854 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerStarted","Data":"a9431d3f961868ea6101196a703261197080bcb6e12d11e990ae0e87e4d6d745"} Mar 13 11:08:16.963248 master-0 kubenswrapper[33013]: I0313 11:08:16.961889 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" 
event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerStarted","Data":"2ddb251839a7ef55230ff77bca60611a18c7bf428fb65a1328ab5281fd5eece7"} Mar 13 11:08:16.965818 master-0 kubenswrapper[33013]: I0313 11:08:16.965794 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f95cbfbff-nnc22" Mar 13 11:08:17.055612 master-0 kubenswrapper[33013]: I0313 11:08:17.055375 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75bbf545c6-v5b28"] Mar 13 11:08:17.977063 master-0 kubenswrapper[33013]: I0313 11:08:17.976987 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerStarted","Data":"a75d6b86d9b596fa22ecf953528e92dbb28e6a3e0b79bf86d3dba7a78d364fe6"} Mar 13 11:08:17.977063 master-0 kubenswrapper[33013]: I0313 11:08:17.977050 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9dtp" event={"ID":"60249e0c-b4a7-4f2a-8271-f96ad477f42e","Type":"ContainerStarted","Data":"2b7f4a40e95ca105d819f3ba041765e65dc1085242c332f2cc3ec7ebb6bc2141"} Mar 13 11:08:17.977681 master-0 kubenswrapper[33013]: I0313 11:08:17.977340 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:19.042176 master-0 kubenswrapper[33013]: I0313 11:08:19.042128 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:19.082458 master-0 kubenswrapper[33013]: I0313 11:08:19.082405 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:19.114473 master-0 kubenswrapper[33013]: I0313 11:08:19.114361 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z9dtp" podStartSLOduration=7.385773521 podStartE2EDuration="16.114331696s" 
podCreationTimestamp="2026-03-13 11:08:03 +0000 UTC" firstStartedPulling="2026-03-13 11:08:04.185871004 +0000 UTC m=+667.661824353" lastFinishedPulling="2026-03-13 11:08:12.914429169 +0000 UTC m=+676.390382528" observedRunningTime="2026-03-13 11:08:18.004838356 +0000 UTC m=+681.480791715" watchObservedRunningTime="2026-03-13 11:08:19.114331696 +0000 UTC m=+682.590285035" Mar 13 11:08:21.525576 master-0 kubenswrapper[33013]: I0313 11:08:21.525510 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-44prf" Mar 13 11:08:24.324248 master-0 kubenswrapper[33013]: I0313 11:08:24.324167 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-mmj7d" Mar 13 11:08:24.565627 master-0 kubenswrapper[33013]: I0313 11:08:24.565394 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-7t8nh" Mar 13 11:08:25.746801 master-0 kubenswrapper[33013]: I0313 11:08:25.746567 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tkcjf" Mar 13 11:08:26.471464 master-0 kubenswrapper[33013]: I0313 11:08:26.471359 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dz5sh" Mar 13 11:08:31.528492 master-0 kubenswrapper[33013]: I0313 11:08:31.528408 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-8cz6q"] Mar 13 11:08:31.529627 master-0 kubenswrapper[33013]: I0313 11:08:31.529602 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.532904 master-0 kubenswrapper[33013]: I0313 11:08:31.532857 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 13 11:08:31.550942 master-0 kubenswrapper[33013]: I0313 11:08:31.550882 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-8cz6q"] Mar 13 11:08:31.697696 master-0 kubenswrapper[33013]: I0313 11:08:31.697625 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-sys\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.697696 master-0 kubenswrapper[33013]: I0313 11:08:31.697682 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-node-plugin-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.697997 master-0 kubenswrapper[33013]: I0313 11:08:31.697716 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/afa79dc1-0a26-418c-a280-10ed988f4f40-metrics-cert\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.697997 master-0 kubenswrapper[33013]: I0313 11:08:31.697787 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpt95\" (UniqueName: \"kubernetes.io/projected/afa79dc1-0a26-418c-a280-10ed988f4f40-kube-api-access-zpt95\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " 
pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.697997 master-0 kubenswrapper[33013]: I0313 11:08:31.697958 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-csi-plugin-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.698141 master-0 kubenswrapper[33013]: I0313 11:08:31.698104 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-lvmd-config\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.698193 master-0 kubenswrapper[33013]: I0313 11:08:31.698161 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-device-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.698267 master-0 kubenswrapper[33013]: I0313 11:08:31.698240 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-run-udev\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.698409 master-0 kubenswrapper[33013]: I0313 11:08:31.698380 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-file-lock-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") 
" pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.698561 master-0 kubenswrapper[33013]: I0313 11:08:31.698545 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-registration-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.698703 master-0 kubenswrapper[33013]: I0313 11:08:31.698686 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-pod-volumes-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.800441 master-0 kubenswrapper[33013]: I0313 11:08:31.800396 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-run-udev\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.800736 master-0 kubenswrapper[33013]: I0313 11:08:31.800538 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-run-udev\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.800736 master-0 kubenswrapper[33013]: I0313 11:08:31.800715 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-file-lock-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 
11:08:31.800983 master-0 kubenswrapper[33013]: I0313 11:08:31.800901 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-registration-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.800983 master-0 kubenswrapper[33013]: I0313 11:08:31.800947 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-pod-volumes-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801063 master-0 kubenswrapper[33013]: I0313 11:08:31.800995 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-sys\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801063 master-0 kubenswrapper[33013]: I0313 11:08:31.801012 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-node-plugin-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801063 master-0 kubenswrapper[33013]: I0313 11:08:31.801035 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/afa79dc1-0a26-418c-a280-10ed988f4f40-metrics-cert\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801156 master-0 kubenswrapper[33013]: I0313 11:08:31.801047 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-registration-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801156 master-0 kubenswrapper[33013]: I0313 11:08:31.801057 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpt95\" (UniqueName: \"kubernetes.io/projected/afa79dc1-0a26-418c-a280-10ed988f4f40-kube-api-access-zpt95\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801156 master-0 kubenswrapper[33013]: I0313 11:08:31.801101 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-sys\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801156 master-0 kubenswrapper[33013]: I0313 11:08:31.801109 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-csi-plugin-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801156 master-0 kubenswrapper[33013]: I0313 11:08:31.801149 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-pod-volumes-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801156 master-0 kubenswrapper[33013]: I0313 11:08:31.801155 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-lvmd-config\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801327 master-0 kubenswrapper[33013]: I0313 11:08:31.801180 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-device-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.801327 master-0 kubenswrapper[33013]: I0313 11:08:31.801261 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-device-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.802356 master-0 kubenswrapper[33013]: I0313 11:08:31.801533 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-csi-plugin-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.802356 master-0 kubenswrapper[33013]: I0313 11:08:31.801644 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-lvmd-config\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.802356 master-0 kubenswrapper[33013]: I0313 11:08:31.801793 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-node-plugin-dir\") pod 
\"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.802678 master-0 kubenswrapper[33013]: I0313 11:08:31.802660 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/afa79dc1-0a26-418c-a280-10ed988f4f40-file-lock-dir\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.811444 master-0 kubenswrapper[33013]: I0313 11:08:31.811400 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/afa79dc1-0a26-418c-a280-10ed988f4f40-metrics-cert\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.821442 master-0 kubenswrapper[33013]: I0313 11:08:31.821411 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpt95\" (UniqueName: \"kubernetes.io/projected/afa79dc1-0a26-418c-a280-10ed988f4f40-kube-api-access-zpt95\") pod \"vg-manager-8cz6q\" (UID: \"afa79dc1-0a26-418c-a280-10ed988f4f40\") " pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:31.849776 master-0 kubenswrapper[33013]: I0313 11:08:31.849305 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:32.305071 master-0 kubenswrapper[33013]: I0313 11:08:32.305003 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-8cz6q"] Mar 13 11:08:32.305071 master-0 kubenswrapper[33013]: W0313 11:08:32.305008 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa79dc1_0a26_418c_a280_10ed988f4f40.slice/crio-b2319cb5a24c8f451685811b0d6af741705a4534a98c8df09a3e4f1509b1f568 WatchSource:0}: Error finding container b2319cb5a24c8f451685811b0d6af741705a4534a98c8df09a3e4f1509b1f568: Status 404 returned error can't find the container with id b2319cb5a24c8f451685811b0d6af741705a4534a98c8df09a3e4f1509b1f568 Mar 13 11:08:33.139698 master-0 kubenswrapper[33013]: I0313 11:08:33.139637 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8cz6q" event={"ID":"afa79dc1-0a26-418c-a280-10ed988f4f40","Type":"ContainerStarted","Data":"172fd1ee2e1e6e6dcfaa34f90e13accef0c01545e9db3334df59253a14a6d238"} Mar 13 11:08:33.139698 master-0 kubenswrapper[33013]: I0313 11:08:33.139694 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8cz6q" event={"ID":"afa79dc1-0a26-418c-a280-10ed988f4f40","Type":"ContainerStarted","Data":"b2319cb5a24c8f451685811b0d6af741705a4534a98c8df09a3e4f1509b1f568"} Mar 13 11:08:33.168806 master-0 kubenswrapper[33013]: I0313 11:08:33.168684 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-8cz6q" podStartSLOduration=2.168661205 podStartE2EDuration="2.168661205s" podCreationTimestamp="2026-03-13 11:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:08:33.161909966 +0000 UTC m=+696.637863325" watchObservedRunningTime="2026-03-13 11:08:33.168661205 +0000 
UTC m=+696.644614554" Mar 13 11:08:34.049395 master-0 kubenswrapper[33013]: I0313 11:08:34.049336 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z9dtp" Mar 13 11:08:35.164964 master-0 kubenswrapper[33013]: I0313 11:08:35.164270 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-8cz6q_afa79dc1-0a26-418c-a280-10ed988f4f40/vg-manager/0.log" Mar 13 11:08:35.164964 master-0 kubenswrapper[33013]: I0313 11:08:35.164320 33013 generic.go:334] "Generic (PLEG): container finished" podID="afa79dc1-0a26-418c-a280-10ed988f4f40" containerID="172fd1ee2e1e6e6dcfaa34f90e13accef0c01545e9db3334df59253a14a6d238" exitCode=1 Mar 13 11:08:35.164964 master-0 kubenswrapper[33013]: I0313 11:08:35.164351 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8cz6q" event={"ID":"afa79dc1-0a26-418c-a280-10ed988f4f40","Type":"ContainerDied","Data":"172fd1ee2e1e6e6dcfaa34f90e13accef0c01545e9db3334df59253a14a6d238"} Mar 13 11:08:35.164964 master-0 kubenswrapper[33013]: I0313 11:08:35.164932 33013 scope.go:117] "RemoveContainer" containerID="172fd1ee2e1e6e6dcfaa34f90e13accef0c01545e9db3334df59253a14a6d238" Mar 13 11:08:35.499076 master-0 kubenswrapper[33013]: I0313 11:08:35.498920 33013 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 13 11:08:35.654477 master-0 kubenswrapper[33013]: I0313 11:08:35.654316 33013 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-13T11:08:35.499284779Z","Handler":null,"Name":""} Mar 13 11:08:35.673221 master-0 kubenswrapper[33013]: I0313 11:08:35.672732 33013 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock 
versions: 1.0.0 Mar 13 11:08:35.673221 master-0 kubenswrapper[33013]: I0313 11:08:35.672784 33013 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 13 11:08:36.174540 master-0 kubenswrapper[33013]: I0313 11:08:36.174494 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-8cz6q_afa79dc1-0a26-418c-a280-10ed988f4f40/vg-manager/0.log" Mar 13 11:08:36.176321 master-0 kubenswrapper[33013]: I0313 11:08:36.176276 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8cz6q" event={"ID":"afa79dc1-0a26-418c-a280-10ed988f4f40","Type":"ContainerStarted","Data":"be221ef468656c083b2224399a5b9c0a5279417cdbbda01360d93449507da673"} Mar 13 11:08:38.582180 master-0 kubenswrapper[33013]: I0313 11:08:38.582092 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-m4b9f"] Mar 13 11:08:38.589561 master-0 kubenswrapper[33013]: I0313 11:08:38.589499 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m4b9f" Mar 13 11:08:38.592230 master-0 kubenswrapper[33013]: I0313 11:08:38.592187 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 13 11:08:38.593258 master-0 kubenswrapper[33013]: I0313 11:08:38.593225 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 13 11:08:38.604431 master-0 kubenswrapper[33013]: I0313 11:08:38.602232 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m4b9f"] Mar 13 11:08:38.689360 master-0 kubenswrapper[33013]: I0313 11:08:38.689299 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfqw2\" (UniqueName: \"kubernetes.io/projected/8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd-kube-api-access-tfqw2\") pod \"openstack-operator-index-m4b9f\" (UID: \"8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd\") " pod="openstack-operators/openstack-operator-index-m4b9f" Mar 13 11:08:38.794910 master-0 kubenswrapper[33013]: I0313 11:08:38.794841 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfqw2\" (UniqueName: \"kubernetes.io/projected/8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd-kube-api-access-tfqw2\") pod \"openstack-operator-index-m4b9f\" (UID: \"8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd\") " pod="openstack-operators/openstack-operator-index-m4b9f" Mar 13 11:08:38.817381 master-0 kubenswrapper[33013]: I0313 11:08:38.817322 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfqw2\" (UniqueName: \"kubernetes.io/projected/8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd-kube-api-access-tfqw2\") pod \"openstack-operator-index-m4b9f\" (UID: \"8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd\") " pod="openstack-operators/openstack-operator-index-m4b9f" Mar 13 11:08:38.933540 master-0 
kubenswrapper[33013]: I0313 11:08:38.933350 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-m4b9f" Mar 13 11:08:39.549651 master-0 kubenswrapper[33013]: I0313 11:08:39.549609 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m4b9f"] Mar 13 11:08:40.213761 master-0 kubenswrapper[33013]: I0313 11:08:40.213688 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m4b9f" event={"ID":"8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd","Type":"ContainerStarted","Data":"ae05bd20c5da4d98093ee54cd17d23a903d52e02875df9ad9e3317207083ab58"} Mar 13 11:08:41.223603 master-0 kubenswrapper[33013]: I0313 11:08:41.223512 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m4b9f" event={"ID":"8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd","Type":"ContainerStarted","Data":"52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72"} Mar 13 11:08:41.246167 master-0 kubenswrapper[33013]: I0313 11:08:41.246080 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-m4b9f" podStartSLOduration=2.35108931 podStartE2EDuration="3.246055507s" podCreationTimestamp="2026-03-13 11:08:38 +0000 UTC" firstStartedPulling="2026-03-13 11:08:39.546662357 +0000 UTC m=+703.022615706" lastFinishedPulling="2026-03-13 11:08:40.441628514 +0000 UTC m=+703.917581903" observedRunningTime="2026-03-13 11:08:41.23901703 +0000 UTC m=+704.714970399" watchObservedRunningTime="2026-03-13 11:08:41.246055507 +0000 UTC m=+704.722008856" Mar 13 11:08:41.850921 master-0 kubenswrapper[33013]: I0313 11:08:41.850865 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:41.854552 master-0 kubenswrapper[33013]: I0313 11:08:41.854534 33013 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:42.129308 master-0 kubenswrapper[33013]: I0313 11:08:42.129158 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-75bbf545c6-v5b28" podUID="9970752c-2c89-447e-a248-73504d39e4e6" containerName="console" containerID="cri-o://aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3" gracePeriod=15 Mar 13 11:08:42.236446 master-0 kubenswrapper[33013]: I0313 11:08:42.236364 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:42.237561 master-0 kubenswrapper[33013]: I0313 11:08:42.237513 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-8cz6q" Mar 13 11:08:42.779033 master-0 kubenswrapper[33013]: I0313 11:08:42.778872 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75bbf545c6-v5b28_9970752c-2c89-447e-a248-73504d39e4e6/console/0.log" Mar 13 11:08:42.779033 master-0 kubenswrapper[33013]: I0313 11:08:42.778995 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:08:42.803118 master-0 kubenswrapper[33013]: I0313 11:08:42.803052 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-m4b9f"] Mar 13 11:08:42.902684 master-0 kubenswrapper[33013]: I0313 11:08:42.901533 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-service-ca\") pod \"9970752c-2c89-447e-a248-73504d39e4e6\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " Mar 13 11:08:42.902684 master-0 kubenswrapper[33013]: I0313 11:08:42.901651 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf7fd\" (UniqueName: \"kubernetes.io/projected/9970752c-2c89-447e-a248-73504d39e4e6-kube-api-access-qf7fd\") pod \"9970752c-2c89-447e-a248-73504d39e4e6\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " Mar 13 11:08:42.902684 master-0 kubenswrapper[33013]: I0313 11:08:42.901758 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-trusted-ca-bundle\") pod \"9970752c-2c89-447e-a248-73504d39e4e6\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " Mar 13 11:08:42.902684 master-0 kubenswrapper[33013]: I0313 11:08:42.901810 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-serving-cert\") pod \"9970752c-2c89-447e-a248-73504d39e4e6\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " Mar 13 11:08:42.902684 master-0 kubenswrapper[33013]: I0313 11:08:42.901845 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-console-config\") pod \"9970752c-2c89-447e-a248-73504d39e4e6\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " Mar 13 11:08:42.902684 master-0 kubenswrapper[33013]: I0313 11:08:42.901889 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-oauth-config\") pod \"9970752c-2c89-447e-a248-73504d39e4e6\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " Mar 13 11:08:42.902684 master-0 kubenswrapper[33013]: I0313 11:08:42.901934 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-oauth-serving-cert\") pod \"9970752c-2c89-447e-a248-73504d39e4e6\" (UID: \"9970752c-2c89-447e-a248-73504d39e4e6\") " Mar 13 11:08:42.902684 master-0 kubenswrapper[33013]: I0313 11:08:42.902303 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-service-ca" (OuterVolumeSpecName: "service-ca") pod "9970752c-2c89-447e-a248-73504d39e4e6" (UID: "9970752c-2c89-447e-a248-73504d39e4e6"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:08:42.902684 master-0 kubenswrapper[33013]: I0313 11:08:42.902468 33013 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 11:08:42.903191 master-0 kubenswrapper[33013]: I0313 11:08:42.902828 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9970752c-2c89-447e-a248-73504d39e4e6" (UID: "9970752c-2c89-447e-a248-73504d39e4e6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:08:42.903191 master-0 kubenswrapper[33013]: I0313 11:08:42.903012 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-console-config" (OuterVolumeSpecName: "console-config") pod "9970752c-2c89-447e-a248-73504d39e4e6" (UID: "9970752c-2c89-447e-a248-73504d39e4e6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:08:42.903466 master-0 kubenswrapper[33013]: I0313 11:08:42.903432 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9970752c-2c89-447e-a248-73504d39e4e6" (UID: "9970752c-2c89-447e-a248-73504d39e4e6"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:08:42.908638 master-0 kubenswrapper[33013]: I0313 11:08:42.908199 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9970752c-2c89-447e-a248-73504d39e4e6-kube-api-access-qf7fd" (OuterVolumeSpecName: "kube-api-access-qf7fd") pod "9970752c-2c89-447e-a248-73504d39e4e6" (UID: "9970752c-2c89-447e-a248-73504d39e4e6"). InnerVolumeSpecName "kube-api-access-qf7fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:08:42.910442 master-0 kubenswrapper[33013]: I0313 11:08:42.910391 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9970752c-2c89-447e-a248-73504d39e4e6" (UID: "9970752c-2c89-447e-a248-73504d39e4e6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:08:42.914080 master-0 kubenswrapper[33013]: I0313 11:08:42.914030 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9970752c-2c89-447e-a248-73504d39e4e6" (UID: "9970752c-2c89-447e-a248-73504d39e4e6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:08:43.005418 master-0 kubenswrapper[33013]: I0313 11:08:43.005343 33013 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:08:43.005418 master-0 kubenswrapper[33013]: I0313 11:08:43.005402 33013 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 11:08:43.005418 master-0 kubenswrapper[33013]: I0313 11:08:43.005413 33013 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:08:43.005418 master-0 kubenswrapper[33013]: I0313 11:08:43.005423 33013 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9970752c-2c89-447e-a248-73504d39e4e6-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:08:43.005418 master-0 kubenswrapper[33013]: I0313 11:08:43.005436 33013 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9970752c-2c89-447e-a248-73504d39e4e6-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 11:08:43.005418 master-0 kubenswrapper[33013]: I0313 11:08:43.005445 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf7fd\" (UniqueName: \"kubernetes.io/projected/9970752c-2c89-447e-a248-73504d39e4e6-kube-api-access-qf7fd\") on node \"master-0\" DevicePath \"\"" Mar 13 11:08:43.250071 master-0 kubenswrapper[33013]: I0313 11:08:43.250033 33013 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-75bbf545c6-v5b28_9970752c-2c89-447e-a248-73504d39e4e6/console/0.log" Mar 13 11:08:43.250685 master-0 kubenswrapper[33013]: I0313 11:08:43.250085 33013 generic.go:334] "Generic (PLEG): container finished" podID="9970752c-2c89-447e-a248-73504d39e4e6" containerID="aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3" exitCode=2 Mar 13 11:08:43.250685 master-0 kubenswrapper[33013]: I0313 11:08:43.250406 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75bbf545c6-v5b28" event={"ID":"9970752c-2c89-447e-a248-73504d39e4e6","Type":"ContainerDied","Data":"aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3"} Mar 13 11:08:43.250685 master-0 kubenswrapper[33013]: I0313 11:08:43.250521 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75bbf545c6-v5b28" event={"ID":"9970752c-2c89-447e-a248-73504d39e4e6","Type":"ContainerDied","Data":"c9c962429d52f4e09577f51cd3f80bde7c23d67053505d501606b7910e2c038c"} Mar 13 11:08:43.250685 master-0 kubenswrapper[33013]: I0313 11:08:43.250545 33013 scope.go:117] "RemoveContainer" containerID="aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3" Mar 13 11:08:43.250685 master-0 kubenswrapper[33013]: I0313 11:08:43.250669 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75bbf545c6-v5b28" Mar 13 11:08:43.250855 master-0 kubenswrapper[33013]: I0313 11:08:43.250810 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-m4b9f" podUID="8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd" containerName="registry-server" containerID="cri-o://52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72" gracePeriod=2 Mar 13 11:08:43.288885 master-0 kubenswrapper[33013]: I0313 11:08:43.288817 33013 scope.go:117] "RemoveContainer" containerID="aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3" Mar 13 11:08:43.291329 master-0 kubenswrapper[33013]: E0313 11:08:43.290035 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3\": container with ID starting with aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3 not found: ID does not exist" containerID="aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3" Mar 13 11:08:43.291329 master-0 kubenswrapper[33013]: I0313 11:08:43.290091 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3"} err="failed to get container status \"aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3\": rpc error: code = NotFound desc = could not find container \"aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3\": container with ID starting with aeedc22f8d900911dbfcedcfed286c7c821421267a5d27439b32f8e6d653a5a3 not found: ID does not exist" Mar 13 11:08:43.783616 master-0 kubenswrapper[33013]: I0313 11:08:43.783506 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vr6tx"] Mar 13 11:08:43.784659 master-0 kubenswrapper[33013]: E0313 
11:08:43.783994 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9970752c-2c89-447e-a248-73504d39e4e6" containerName="console" Mar 13 11:08:43.784659 master-0 kubenswrapper[33013]: I0313 11:08:43.784010 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="9970752c-2c89-447e-a248-73504d39e4e6" containerName="console" Mar 13 11:08:43.784659 master-0 kubenswrapper[33013]: I0313 11:08:43.784191 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="9970752c-2c89-447e-a248-73504d39e4e6" containerName="console" Mar 13 11:08:43.784885 master-0 kubenswrapper[33013]: I0313 11:08:43.784844 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vr6tx" Mar 13 11:08:43.922432 master-0 kubenswrapper[33013]: I0313 11:08:43.922243 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmk77\" (UniqueName: \"kubernetes.io/projected/f0ea3cb4-9c05-49bf-b950-04885fd9a6bb-kube-api-access-lmk77\") pod \"openstack-operator-index-vr6tx\" (UID: \"f0ea3cb4-9c05-49bf-b950-04885fd9a6bb\") " pod="openstack-operators/openstack-operator-index-vr6tx" Mar 13 11:08:43.997410 master-0 kubenswrapper[33013]: I0313 11:08:43.997365 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m4b9f" Mar 13 11:08:44.024017 master-0 kubenswrapper[33013]: I0313 11:08:44.023963 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmk77\" (UniqueName: \"kubernetes.io/projected/f0ea3cb4-9c05-49bf-b950-04885fd9a6bb-kube-api-access-lmk77\") pod \"openstack-operator-index-vr6tx\" (UID: \"f0ea3cb4-9c05-49bf-b950-04885fd9a6bb\") " pod="openstack-operators/openstack-operator-index-vr6tx" Mar 13 11:08:44.126377 master-0 kubenswrapper[33013]: I0313 11:08:44.126312 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfqw2\" (UniqueName: \"kubernetes.io/projected/8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd-kube-api-access-tfqw2\") pod \"8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd\" (UID: \"8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd\") " Mar 13 11:08:44.131089 master-0 kubenswrapper[33013]: I0313 11:08:44.131053 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd-kube-api-access-tfqw2" (OuterVolumeSpecName: "kube-api-access-tfqw2") pod "8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd" (UID: "8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd"). InnerVolumeSpecName "kube-api-access-tfqw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:08:44.169278 master-0 kubenswrapper[33013]: I0313 11:08:44.168574 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vr6tx"] Mar 13 11:08:44.210610 master-0 kubenswrapper[33013]: I0313 11:08:44.210473 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmk77\" (UniqueName: \"kubernetes.io/projected/f0ea3cb4-9c05-49bf-b950-04885fd9a6bb-kube-api-access-lmk77\") pod \"openstack-operator-index-vr6tx\" (UID: \"f0ea3cb4-9c05-49bf-b950-04885fd9a6bb\") " pod="openstack-operators/openstack-operator-index-vr6tx" Mar 13 11:08:44.210610 master-0 kubenswrapper[33013]: I0313 11:08:44.210550 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75bbf545c6-v5b28"] Mar 13 11:08:44.219023 master-0 kubenswrapper[33013]: I0313 11:08:44.216643 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-75bbf545c6-v5b28"] Mar 13 11:08:44.229240 master-0 kubenswrapper[33013]: I0313 11:08:44.229188 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfqw2\" (UniqueName: \"kubernetes.io/projected/8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd-kube-api-access-tfqw2\") on node \"master-0\" DevicePath \"\"" Mar 13 11:08:44.265693 master-0 kubenswrapper[33013]: I0313 11:08:44.265639 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m4b9f" Mar 13 11:08:44.266526 master-0 kubenswrapper[33013]: I0313 11:08:44.265707 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m4b9f" event={"ID":"8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd","Type":"ContainerDied","Data":"52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72"} Mar 13 11:08:44.266526 master-0 kubenswrapper[33013]: I0313 11:08:44.265769 33013 scope.go:117] "RemoveContainer" containerID="52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72" Mar 13 11:08:44.266699 master-0 kubenswrapper[33013]: I0313 11:08:44.266666 33013 generic.go:334] "Generic (PLEG): container finished" podID="8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd" containerID="52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72" exitCode=0 Mar 13 11:08:44.267023 master-0 kubenswrapper[33013]: I0313 11:08:44.266989 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m4b9f" event={"ID":"8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd","Type":"ContainerDied","Data":"ae05bd20c5da4d98093ee54cd17d23a903d52e02875df9ad9e3317207083ab58"} Mar 13 11:08:44.294825 master-0 kubenswrapper[33013]: I0313 11:08:44.294773 33013 scope.go:117] "RemoveContainer" containerID="52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72" Mar 13 11:08:44.295559 master-0 kubenswrapper[33013]: E0313 11:08:44.295465 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72\": container with ID starting with 52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72 not found: ID does not exist" containerID="52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72" Mar 13 11:08:44.295813 master-0 kubenswrapper[33013]: I0313 11:08:44.295783 33013 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72"} err="failed to get container status \"52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72\": rpc error: code = NotFound desc = could not find container \"52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72\": container with ID starting with 52df88e7fd1c835ff7abdf290aa3bfeb6aacf40046e661e3d50bd94caa425c72 not found: ID does not exist" Mar 13 11:08:44.317102 master-0 kubenswrapper[33013]: I0313 11:08:44.317054 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-m4b9f"] Mar 13 11:08:44.325095 master-0 kubenswrapper[33013]: I0313 11:08:44.325021 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-m4b9f"] Mar 13 11:08:44.404506 master-0 kubenswrapper[33013]: I0313 11:08:44.403344 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vr6tx" Mar 13 11:08:44.672733 master-0 kubenswrapper[33013]: I0313 11:08:44.672650 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vr6tx"] Mar 13 11:08:44.681966 master-0 kubenswrapper[33013]: W0313 11:08:44.681917 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0ea3cb4_9c05_49bf_b950_04885fd9a6bb.slice/crio-8d541106d4a1a4d7b91b1d46595d18c9c2f09cc8d6c92177b4a1f6d359ef92b6 WatchSource:0}: Error finding container 8d541106d4a1a4d7b91b1d46595d18c9c2f09cc8d6c92177b4a1f6d359ef92b6: Status 404 returned error can't find the container with id 8d541106d4a1a4d7b91b1d46595d18c9c2f09cc8d6c92177b4a1f6d359ef92b6 Mar 13 11:08:44.727912 master-0 kubenswrapper[33013]: I0313 11:08:44.727684 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd" path="/var/lib/kubelet/pods/8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd/volumes" Mar 13 11:08:44.728493 master-0 kubenswrapper[33013]: I0313 11:08:44.728462 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9970752c-2c89-447e-a248-73504d39e4e6" path="/var/lib/kubelet/pods/9970752c-2c89-447e-a248-73504d39e4e6/volumes" Mar 13 11:08:45.278697 master-0 kubenswrapper[33013]: I0313 11:08:45.278550 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vr6tx" event={"ID":"f0ea3cb4-9c05-49bf-b950-04885fd9a6bb","Type":"ContainerStarted","Data":"8d541106d4a1a4d7b91b1d46595d18c9c2f09cc8d6c92177b4a1f6d359ef92b6"} Mar 13 11:08:46.289100 master-0 kubenswrapper[33013]: I0313 11:08:46.288911 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vr6tx" 
event={"ID":"f0ea3cb4-9c05-49bf-b950-04885fd9a6bb","Type":"ContainerStarted","Data":"fae06e5c75e559e88e566027b0790ce46cf783e5e5a2414b4c270de9151f8a82"} Mar 13 11:08:46.305559 master-0 kubenswrapper[33013]: I0313 11:08:46.305470 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vr6tx" podStartSLOduration=2.832224592 podStartE2EDuration="3.305451565s" podCreationTimestamp="2026-03-13 11:08:43 +0000 UTC" firstStartedPulling="2026-03-13 11:08:44.687353233 +0000 UTC m=+708.163306582" lastFinishedPulling="2026-03-13 11:08:45.160580206 +0000 UTC m=+708.636533555" observedRunningTime="2026-03-13 11:08:46.304937211 +0000 UTC m=+709.780890560" watchObservedRunningTime="2026-03-13 11:08:46.305451565 +0000 UTC m=+709.781404924" Mar 13 11:08:54.404713 master-0 kubenswrapper[33013]: I0313 11:08:54.404636 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-vr6tx" Mar 13 11:08:54.404713 master-0 kubenswrapper[33013]: I0313 11:08:54.404728 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-vr6tx" Mar 13 11:08:54.442420 master-0 kubenswrapper[33013]: I0313 11:08:54.442324 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-vr6tx" Mar 13 11:08:55.403642 master-0 kubenswrapper[33013]: I0313 11:08:55.403555 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-vr6tx" Mar 13 11:09:01.839554 master-0 kubenswrapper[33013]: I0313 11:09:01.839485 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727"] Mar 13 11:09:01.841004 master-0 kubenswrapper[33013]: E0313 11:09:01.840979 33013 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd" containerName="registry-server" Mar 13 11:09:01.841113 master-0 kubenswrapper[33013]: I0313 11:09:01.841098 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd" containerName="registry-server" Mar 13 11:09:01.841457 master-0 kubenswrapper[33013]: I0313 11:09:01.841441 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf8ff4c-18d2-4fda-bdd5-955cf3d513fd" containerName="registry-server" Mar 13 11:09:01.843285 master-0 kubenswrapper[33013]: I0313 11:09:01.843261 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:01.872343 master-0 kubenswrapper[33013]: I0313 11:09:01.872301 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727"] Mar 13 11:09:01.972799 master-0 kubenswrapper[33013]: I0313 11:09:01.972746 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnll4\" (UniqueName: \"kubernetes.io/projected/b04b5ac2-d315-41e5-8445-777d303651dc-kube-api-access-tnll4\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:01.973132 master-0 kubenswrapper[33013]: I0313 11:09:01.973112 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:01.973374 master-0 
kubenswrapper[33013]: I0313 11:09:01.973360 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:02.075154 master-0 kubenswrapper[33013]: I0313 11:09:02.075068 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:02.075617 master-0 kubenswrapper[33013]: I0313 11:09:02.075180 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnll4\" (UniqueName: \"kubernetes.io/projected/b04b5ac2-d315-41e5-8445-777d303651dc-kube-api-access-tnll4\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:02.075617 master-0 kubenswrapper[33013]: I0313 11:09:02.075222 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:02.075916 master-0 kubenswrapper[33013]: I0313 11:09:02.075857 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:02.075976 master-0 kubenswrapper[33013]: I0313 11:09:02.075930 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:02.099710 master-0 kubenswrapper[33013]: I0313 11:09:02.099184 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnll4\" (UniqueName: \"kubernetes.io/projected/b04b5ac2-d315-41e5-8445-777d303651dc-kube-api-access-tnll4\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:02.177292 master-0 kubenswrapper[33013]: I0313 11:09:02.177215 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:02.646660 master-0 kubenswrapper[33013]: I0313 11:09:02.645771 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727"] Mar 13 11:09:02.655434 master-0 kubenswrapper[33013]: W0313 11:09:02.655387 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb04b5ac2_d315_41e5_8445_777d303651dc.slice/crio-7fc916d3bc65d0d582475fb9a44024b3eb4bca9c3b1bbefe9bbcfa8fc6c66c26 WatchSource:0}: Error finding container 7fc916d3bc65d0d582475fb9a44024b3eb4bca9c3b1bbefe9bbcfa8fc6c66c26: Status 404 returned error can't find the container with id 7fc916d3bc65d0d582475fb9a44024b3eb4bca9c3b1bbefe9bbcfa8fc6c66c26 Mar 13 11:09:03.464722 master-0 kubenswrapper[33013]: I0313 11:09:03.463031 33013 generic.go:334] "Generic (PLEG): container finished" podID="b04b5ac2-d315-41e5-8445-777d303651dc" containerID="9e0c82e9707971b6c86c9aae3df81f3caa40f6cd3adde010e718790da51a39ee" exitCode=0 Mar 13 11:09:03.464722 master-0 kubenswrapper[33013]: I0313 11:09:03.463100 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" event={"ID":"b04b5ac2-d315-41e5-8445-777d303651dc","Type":"ContainerDied","Data":"9e0c82e9707971b6c86c9aae3df81f3caa40f6cd3adde010e718790da51a39ee"} Mar 13 11:09:03.464722 master-0 kubenswrapper[33013]: I0313 11:09:03.463132 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" event={"ID":"b04b5ac2-d315-41e5-8445-777d303651dc","Type":"ContainerStarted","Data":"7fc916d3bc65d0d582475fb9a44024b3eb4bca9c3b1bbefe9bbcfa8fc6c66c26"} Mar 13 11:09:05.489221 master-0 kubenswrapper[33013]: I0313 11:09:05.489139 33013 
generic.go:334] "Generic (PLEG): container finished" podID="b04b5ac2-d315-41e5-8445-777d303651dc" containerID="40a7568fbe44b1e2e02ed01eb9e142e0c0abf48802258df8c0694db676dddf5d" exitCode=0 Mar 13 11:09:05.489998 master-0 kubenswrapper[33013]: I0313 11:09:05.489224 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" event={"ID":"b04b5ac2-d315-41e5-8445-777d303651dc","Type":"ContainerDied","Data":"40a7568fbe44b1e2e02ed01eb9e142e0c0abf48802258df8c0694db676dddf5d"} Mar 13 11:09:06.499158 master-0 kubenswrapper[33013]: I0313 11:09:06.499086 33013 generic.go:334] "Generic (PLEG): container finished" podID="b04b5ac2-d315-41e5-8445-777d303651dc" containerID="9e347ac3e0aef6e02795970f5983eedbf751abf136dde6a6a69ecfd7139b9afb" exitCode=0 Mar 13 11:09:06.499158 master-0 kubenswrapper[33013]: I0313 11:09:06.499141 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" event={"ID":"b04b5ac2-d315-41e5-8445-777d303651dc","Type":"ContainerDied","Data":"9e347ac3e0aef6e02795970f5983eedbf751abf136dde6a6a69ecfd7139b9afb"} Mar 13 11:09:07.866764 master-0 kubenswrapper[33013]: I0313 11:09:07.866699 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:07.999097 master-0 kubenswrapper[33013]: I0313 11:09:07.999002 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-bundle\") pod \"b04b5ac2-d315-41e5-8445-777d303651dc\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " Mar 13 11:09:07.999097 master-0 kubenswrapper[33013]: I0313 11:09:07.999097 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnll4\" (UniqueName: \"kubernetes.io/projected/b04b5ac2-d315-41e5-8445-777d303651dc-kube-api-access-tnll4\") pod \"b04b5ac2-d315-41e5-8445-777d303651dc\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " Mar 13 11:09:07.999809 master-0 kubenswrapper[33013]: I0313 11:09:07.999128 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-util\") pod \"b04b5ac2-d315-41e5-8445-777d303651dc\" (UID: \"b04b5ac2-d315-41e5-8445-777d303651dc\") " Mar 13 11:09:08.000100 master-0 kubenswrapper[33013]: I0313 11:09:07.999903 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-bundle" (OuterVolumeSpecName: "bundle") pod "b04b5ac2-d315-41e5-8445-777d303651dc" (UID: "b04b5ac2-d315-41e5-8445-777d303651dc"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:09:08.004016 master-0 kubenswrapper[33013]: I0313 11:09:08.003967 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04b5ac2-d315-41e5-8445-777d303651dc-kube-api-access-tnll4" (OuterVolumeSpecName: "kube-api-access-tnll4") pod "b04b5ac2-d315-41e5-8445-777d303651dc" (UID: "b04b5ac2-d315-41e5-8445-777d303651dc"). InnerVolumeSpecName "kube-api-access-tnll4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:09:08.012329 master-0 kubenswrapper[33013]: I0313 11:09:08.012240 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-util" (OuterVolumeSpecName: "util") pod "b04b5ac2-d315-41e5-8445-777d303651dc" (UID: "b04b5ac2-d315-41e5-8445-777d303651dc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:09:08.101827 master-0 kubenswrapper[33013]: I0313 11:09:08.101656 33013 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:09:08.101827 master-0 kubenswrapper[33013]: I0313 11:09:08.101733 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnll4\" (UniqueName: \"kubernetes.io/projected/b04b5ac2-d315-41e5-8445-777d303651dc-kube-api-access-tnll4\") on node \"master-0\" DevicePath \"\"" Mar 13 11:09:08.101827 master-0 kubenswrapper[33013]: I0313 11:09:08.101749 33013 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b04b5ac2-d315-41e5-8445-777d303651dc-util\") on node \"master-0\" DevicePath \"\"" Mar 13 11:09:08.517815 master-0 kubenswrapper[33013]: I0313 11:09:08.517555 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" event={"ID":"b04b5ac2-d315-41e5-8445-777d303651dc","Type":"ContainerDied","Data":"7fc916d3bc65d0d582475fb9a44024b3eb4bca9c3b1bbefe9bbcfa8fc6c66c26"} Mar 13 11:09:08.517815 master-0 kubenswrapper[33013]: I0313 11:09:08.517625 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477gm727" Mar 13 11:09:08.517815 master-0 kubenswrapper[33013]: I0313 11:09:08.517631 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc916d3bc65d0d582475fb9a44024b3eb4bca9c3b1bbefe9bbcfa8fc6c66c26" Mar 13 11:09:15.105909 master-0 kubenswrapper[33013]: I0313 11:09:15.105821 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv"] Mar 13 11:09:15.106672 master-0 kubenswrapper[33013]: E0313 11:09:15.106323 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04b5ac2-d315-41e5-8445-777d303651dc" containerName="pull" Mar 13 11:09:15.106672 master-0 kubenswrapper[33013]: I0313 11:09:15.106343 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04b5ac2-d315-41e5-8445-777d303651dc" containerName="pull" Mar 13 11:09:15.106672 master-0 kubenswrapper[33013]: E0313 11:09:15.106394 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04b5ac2-d315-41e5-8445-777d303651dc" containerName="extract" Mar 13 11:09:15.106672 master-0 kubenswrapper[33013]: I0313 11:09:15.106402 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04b5ac2-d315-41e5-8445-777d303651dc" containerName="extract" Mar 13 11:09:15.106672 master-0 kubenswrapper[33013]: E0313 11:09:15.106427 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04b5ac2-d315-41e5-8445-777d303651dc" containerName="util" Mar 13 11:09:15.106672 master-0 
kubenswrapper[33013]: I0313 11:09:15.106436 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04b5ac2-d315-41e5-8445-777d303651dc" containerName="util" Mar 13 11:09:15.106861 master-0 kubenswrapper[33013]: I0313 11:09:15.106699 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04b5ac2-d315-41e5-8445-777d303651dc" containerName="extract" Mar 13 11:09:15.107377 master-0 kubenswrapper[33013]: I0313 11:09:15.107353 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" Mar 13 11:09:15.229616 master-0 kubenswrapper[33013]: I0313 11:09:15.223572 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv"] Mar 13 11:09:15.242623 master-0 kubenswrapper[33013]: I0313 11:09:15.236681 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmtsc\" (UniqueName: \"kubernetes.io/projected/49132e99-b72c-4301-a75b-b9b1abb39c64-kube-api-access-gmtsc\") pod \"openstack-operator-controller-init-65b9994cf8-fxkjv\" (UID: \"49132e99-b72c-4301-a75b-b9b1abb39c64\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" Mar 13 11:09:15.338159 master-0 kubenswrapper[33013]: I0313 11:09:15.338083 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmtsc\" (UniqueName: \"kubernetes.io/projected/49132e99-b72c-4301-a75b-b9b1abb39c64-kube-api-access-gmtsc\") pod \"openstack-operator-controller-init-65b9994cf8-fxkjv\" (UID: \"49132e99-b72c-4301-a75b-b9b1abb39c64\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" Mar 13 11:09:15.361732 master-0 kubenswrapper[33013]: I0313 11:09:15.361330 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmtsc\" (UniqueName: 
\"kubernetes.io/projected/49132e99-b72c-4301-a75b-b9b1abb39c64-kube-api-access-gmtsc\") pod \"openstack-operator-controller-init-65b9994cf8-fxkjv\" (UID: \"49132e99-b72c-4301-a75b-b9b1abb39c64\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" Mar 13 11:09:15.434352 master-0 kubenswrapper[33013]: I0313 11:09:15.434285 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" Mar 13 11:09:15.910017 master-0 kubenswrapper[33013]: I0313 11:09:15.909952 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv"] Mar 13 11:09:16.612430 master-0 kubenswrapper[33013]: I0313 11:09:16.612355 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" event={"ID":"49132e99-b72c-4301-a75b-b9b1abb39c64","Type":"ContainerStarted","Data":"30b3891fcd99997a427a34a0e2cd07dd4f885e11d6ee10b722e20c0698d66a00"} Mar 13 11:09:21.679151 master-0 kubenswrapper[33013]: I0313 11:09:21.679078 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" event={"ID":"49132e99-b72c-4301-a75b-b9b1abb39c64","Type":"ContainerStarted","Data":"eff20f80da67a3d8ed9b73086f923f03972bcc213c278e954356eccfccedb077"} Mar 13 11:09:21.680079 master-0 kubenswrapper[33013]: I0313 11:09:21.680039 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" Mar 13 11:09:21.720059 master-0 kubenswrapper[33013]: I0313 11:09:21.719963 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" podStartSLOduration=1.778425156 podStartE2EDuration="6.719938478s" podCreationTimestamp="2026-03-13 
11:09:15 +0000 UTC" firstStartedPulling="2026-03-13 11:09:15.922679328 +0000 UTC m=+739.398632677" lastFinishedPulling="2026-03-13 11:09:20.86419265 +0000 UTC m=+744.340145999" observedRunningTime="2026-03-13 11:09:21.711666066 +0000 UTC m=+745.187619415" watchObservedRunningTime="2026-03-13 11:09:21.719938478 +0000 UTC m=+745.195891827" Mar 13 11:09:35.437811 master-0 kubenswrapper[33013]: I0313 11:09:35.437729 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-fxkjv" Mar 13 11:09:55.845282 master-0 kubenswrapper[33013]: I0313 11:09:55.845125 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2"] Mar 13 11:09:55.847612 master-0 kubenswrapper[33013]: I0313 11:09:55.846384 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" Mar 13 11:09:55.907621 master-0 kubenswrapper[33013]: I0313 11:09:55.907037 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5n22\" (UniqueName: \"kubernetes.io/projected/72eda6f5-7ceb-41a5-a145-c823bf409279-kube-api-access-n5n22\") pod \"barbican-operator-controller-manager-677bd678f7-xx9n2\" (UID: \"72eda6f5-7ceb-41a5-a145-c823bf409279\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" Mar 13 11:09:55.934617 master-0 kubenswrapper[33013]: I0313 11:09:55.927015 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9"] Mar 13 11:09:55.934617 master-0 kubenswrapper[33013]: I0313 11:09:55.929082 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" Mar 13 11:09:55.963614 master-0 kubenswrapper[33013]: I0313 11:09:55.962706 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2"] Mar 13 11:09:55.986626 master-0 kubenswrapper[33013]: I0313 11:09:55.977748 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt"] Mar 13 11:09:55.986626 master-0 kubenswrapper[33013]: I0313 11:09:55.980027 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" Mar 13 11:09:56.018622 master-0 kubenswrapper[33013]: I0313 11:09:56.017186 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2knb\" (UniqueName: \"kubernetes.io/projected/b3b82413-ba4e-4c33-8b26-b300aef4c26a-kube-api-access-c2knb\") pod \"cinder-operator-controller-manager-984cd4dcf-pgqh9\" (UID: \"b3b82413-ba4e-4c33-8b26-b300aef4c26a\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" Mar 13 11:09:56.018622 master-0 kubenswrapper[33013]: I0313 11:09:56.017324 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5n22\" (UniqueName: \"kubernetes.io/projected/72eda6f5-7ceb-41a5-a145-c823bf409279-kube-api-access-n5n22\") pod \"barbican-operator-controller-manager-677bd678f7-xx9n2\" (UID: \"72eda6f5-7ceb-41a5-a145-c823bf409279\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" Mar 13 11:09:56.128758 master-0 kubenswrapper[33013]: I0313 11:09:56.114741 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt"] Mar 13 11:09:56.128758 master-0 kubenswrapper[33013]: 
I0313 11:09:56.128080 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2knb\" (UniqueName: \"kubernetes.io/projected/b3b82413-ba4e-4c33-8b26-b300aef4c26a-kube-api-access-c2knb\") pod \"cinder-operator-controller-manager-984cd4dcf-pgqh9\" (UID: \"b3b82413-ba4e-4c33-8b26-b300aef4c26a\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" Mar 13 11:09:56.128758 master-0 kubenswrapper[33013]: I0313 11:09:56.128161 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ntww\" (UniqueName: \"kubernetes.io/projected/306ace2b-16a3-4373-ba6b-e2ed8f8d9d89-kube-api-access-6ntww\") pod \"designate-operator-controller-manager-66d56f6ff4-2t4jt\" (UID: \"306ace2b-16a3-4373-ba6b-e2ed8f8d9d89\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" Mar 13 11:09:56.156354 master-0 kubenswrapper[33013]: I0313 11:09:56.156282 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5n22\" (UniqueName: \"kubernetes.io/projected/72eda6f5-7ceb-41a5-a145-c823bf409279-kube-api-access-n5n22\") pod \"barbican-operator-controller-manager-677bd678f7-xx9n2\" (UID: \"72eda6f5-7ceb-41a5-a145-c823bf409279\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" Mar 13 11:09:56.217026 master-0 kubenswrapper[33013]: I0313 11:09:56.215161 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9"] Mar 13 11:09:56.227149 master-0 kubenswrapper[33013]: I0313 11:09:56.225362 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2knb\" (UniqueName: \"kubernetes.io/projected/b3b82413-ba4e-4c33-8b26-b300aef4c26a-kube-api-access-c2knb\") pod \"cinder-operator-controller-manager-984cd4dcf-pgqh9\" (UID: \"b3b82413-ba4e-4c33-8b26-b300aef4c26a\") " 
pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" Mar 13 11:09:56.231351 master-0 kubenswrapper[33013]: I0313 11:09:56.231276 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ntww\" (UniqueName: \"kubernetes.io/projected/306ace2b-16a3-4373-ba6b-e2ed8f8d9d89-kube-api-access-6ntww\") pod \"designate-operator-controller-manager-66d56f6ff4-2t4jt\" (UID: \"306ace2b-16a3-4373-ba6b-e2ed8f8d9d89\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" Mar 13 11:09:56.238992 master-0 kubenswrapper[33013]: I0313 11:09:56.233654 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl"] Mar 13 11:09:56.238992 master-0 kubenswrapper[33013]: I0313 11:09:56.234979 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" Mar 13 11:09:56.250423 master-0 kubenswrapper[33013]: I0313 11:09:56.250292 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl"] Mar 13 11:09:56.253272 master-0 kubenswrapper[33013]: I0313 11:09:56.251508 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" Mar 13 11:09:56.255648 master-0 kubenswrapper[33013]: I0313 11:09:56.255599 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" Mar 13 11:09:56.284211 master-0 kubenswrapper[33013]: I0313 11:09:56.283645 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl"] Mar 13 11:09:56.311442 master-0 kubenswrapper[33013]: I0313 11:09:56.311362 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77"] Mar 13 11:09:56.313624 master-0 kubenswrapper[33013]: I0313 11:09:56.313022 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77" Mar 13 11:09:56.319352 master-0 kubenswrapper[33013]: I0313 11:09:56.319309 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" Mar 13 11:09:56.336017 master-0 kubenswrapper[33013]: I0313 11:09:56.334927 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7hkz\" (UniqueName: \"kubernetes.io/projected/b1a6f67c-186b-46ec-a1c5-284bfed80fca-kube-api-access-j7hkz\") pod \"heat-operator-controller-manager-77b6666d85-kqwhl\" (UID: \"b1a6f67c-186b-46ec-a1c5-284bfed80fca\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" Mar 13 11:09:56.336017 master-0 kubenswrapper[33013]: I0313 11:09:56.335010 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbfpx\" (UniqueName: \"kubernetes.io/projected/1dcceaeb-7636-406b-b013-b76f9a71bee7-kube-api-access-dbfpx\") pod \"horizon-operator-controller-manager-6d9d6b584d-6rf77\" (UID: \"1dcceaeb-7636-406b-b013-b76f9a71bee7\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77" Mar 13 11:09:56.336017 
master-0 kubenswrapper[33013]: I0313 11:09:56.335065 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjmg\" (UniqueName: \"kubernetes.io/projected/c786eb3c-e9a9-4184-8729-c6f379982a73-kube-api-access-5cjmg\") pod \"glance-operator-controller-manager-5964f64c48-p6pnl\" (UID: \"c786eb3c-e9a9-4184-8729-c6f379982a73\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" Mar 13 11:09:56.347785 master-0 kubenswrapper[33013]: I0313 11:09:56.347608 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ntww\" (UniqueName: \"kubernetes.io/projected/306ace2b-16a3-4373-ba6b-e2ed8f8d9d89-kube-api-access-6ntww\") pod \"designate-operator-controller-manager-66d56f6ff4-2t4jt\" (UID: \"306ace2b-16a3-4373-ba6b-e2ed8f8d9d89\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" Mar 13 11:09:56.388312 master-0 kubenswrapper[33013]: I0313 11:09:56.388237 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl"] Mar 13 11:09:56.408892 master-0 kubenswrapper[33013]: I0313 11:09:56.408455 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf"] Mar 13 11:09:56.413835 master-0 kubenswrapper[33013]: I0313 11:09:56.413036 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" Mar 13 11:09:56.416872 master-0 kubenswrapper[33013]: I0313 11:09:56.414694 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:09:56.432144 master-0 kubenswrapper[33013]: I0313 11:09:56.430605 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 13 11:09:56.452650 master-0 kubenswrapper[33013]: I0313 11:09:56.451605 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjmg\" (UniqueName: \"kubernetes.io/projected/c786eb3c-e9a9-4184-8729-c6f379982a73-kube-api-access-5cjmg\") pod \"glance-operator-controller-manager-5964f64c48-p6pnl\" (UID: \"c786eb3c-e9a9-4184-8729-c6f379982a73\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" Mar 13 11:09:56.452650 master-0 kubenswrapper[33013]: I0313 11:09:56.451766 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72mn\" (UniqueName: \"kubernetes.io/projected/3670abc6-2527-4580-bf31-36cc0294afd4-kube-api-access-z72mn\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:09:56.452650 master-0 kubenswrapper[33013]: I0313 11:09:56.451824 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7hkz\" (UniqueName: \"kubernetes.io/projected/b1a6f67c-186b-46ec-a1c5-284bfed80fca-kube-api-access-j7hkz\") pod \"heat-operator-controller-manager-77b6666d85-kqwhl\" (UID: \"b1a6f67c-186b-46ec-a1c5-284bfed80fca\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" Mar 13 11:09:56.452650 master-0 kubenswrapper[33013]: I0313 11:09:56.451857 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:09:56.452650 master-0 kubenswrapper[33013]: I0313 11:09:56.451927 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbfpx\" (UniqueName: \"kubernetes.io/projected/1dcceaeb-7636-406b-b013-b76f9a71bee7-kube-api-access-dbfpx\") pod \"horizon-operator-controller-manager-6d9d6b584d-6rf77\" (UID: \"1dcceaeb-7636-406b-b013-b76f9a71bee7\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77" Mar 13 11:09:56.465811 master-0 kubenswrapper[33013]: I0313 11:09:56.465393 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77"] Mar 13 11:09:56.494571 master-0 kubenswrapper[33013]: I0313 11:09:56.494492 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbfpx\" (UniqueName: \"kubernetes.io/projected/1dcceaeb-7636-406b-b013-b76f9a71bee7-kube-api-access-dbfpx\") pod \"horizon-operator-controller-manager-6d9d6b584d-6rf77\" (UID: \"1dcceaeb-7636-406b-b013-b76f9a71bee7\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77" Mar 13 11:09:56.506642 master-0 kubenswrapper[33013]: I0313 11:09:56.506456 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7hkz\" (UniqueName: \"kubernetes.io/projected/b1a6f67c-186b-46ec-a1c5-284bfed80fca-kube-api-access-j7hkz\") pod \"heat-operator-controller-manager-77b6666d85-kqwhl\" (UID: \"b1a6f67c-186b-46ec-a1c5-284bfed80fca\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" Mar 13 11:09:56.544632 master-0 kubenswrapper[33013]: I0313 11:09:56.543419 33013 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5cjmg\" (UniqueName: \"kubernetes.io/projected/c786eb3c-e9a9-4184-8729-c6f379982a73-kube-api-access-5cjmg\") pod \"glance-operator-controller-manager-5964f64c48-p6pnl\" (UID: \"c786eb3c-e9a9-4184-8729-c6f379982a73\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" Mar 13 11:09:56.547630 master-0 kubenswrapper[33013]: I0313 11:09:56.545745 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf"] Mar 13 11:09:56.561488 master-0 kubenswrapper[33013]: I0313 11:09:56.558461 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z72mn\" (UniqueName: \"kubernetes.io/projected/3670abc6-2527-4580-bf31-36cc0294afd4-kube-api-access-z72mn\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:09:56.561488 master-0 kubenswrapper[33013]: I0313 11:09:56.558569 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:09:56.561488 master-0 kubenswrapper[33013]: E0313 11:09:56.559010 33013 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 11:09:56.561488 master-0 kubenswrapper[33013]: E0313 11:09:56.559082 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert podName:3670abc6-2527-4580-bf31-36cc0294afd4 nodeName:}" failed. 
No retries permitted until 2026-03-13 11:09:57.059057721 +0000 UTC m=+780.535011070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert") pod "infra-operator-controller-manager-b8c8d7cc8-gstmf" (UID: "3670abc6-2527-4580-bf31-36cc0294afd4") : secret "infra-operator-webhook-server-cert" not found Mar 13 11:09:56.561488 master-0 kubenswrapper[33013]: I0313 11:09:56.560257 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6"] Mar 13 11:09:56.565491 master-0 kubenswrapper[33013]: I0313 11:09:56.564050 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6" Mar 13 11:09:56.595757 master-0 kubenswrapper[33013]: I0313 11:09:56.594887 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z72mn\" (UniqueName: \"kubernetes.io/projected/3670abc6-2527-4580-bf31-36cc0294afd4-kube-api-access-z72mn\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:09:56.599977 master-0 kubenswrapper[33013]: I0313 11:09:56.598115 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6"] Mar 13 11:09:56.619631 master-0 kubenswrapper[33013]: I0313 11:09:56.617857 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" Mar 13 11:09:56.674653 master-0 kubenswrapper[33013]: I0313 11:09:56.671561 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp"] Mar 13 11:09:56.674653 master-0 kubenswrapper[33013]: I0313 11:09:56.673810 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5mnm\" (UniqueName: \"kubernetes.io/projected/8d2f8f95-c1d5-48d5-a0ea-c172906cbd9f-kube-api-access-n5mnm\") pod \"ironic-operator-controller-manager-6bbb499bbc-g2vp6\" (UID: \"8d2f8f95-c1d5-48d5-a0ea-c172906cbd9f\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6" Mar 13 11:09:56.681894 master-0 kubenswrapper[33013]: I0313 11:09:56.677142 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp" Mar 13 11:09:56.682298 master-0 kubenswrapper[33013]: I0313 11:09:56.682212 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp"] Mar 13 11:09:56.704812 master-0 kubenswrapper[33013]: I0313 11:09:56.704743 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" Mar 13 11:09:56.729610 master-0 kubenswrapper[33013]: I0313 11:09:56.729545 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77"
Mar 13 11:09:56.781634 master-0 kubenswrapper[33013]: I0313 11:09:56.781455 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwrqz\" (UniqueName: \"kubernetes.io/projected/dc9351b2-a2f7-40bc-bcdb-b27629a9a77f-kube-api-access-xwrqz\") pod \"keystone-operator-controller-manager-684f77d66d-x79wp\" (UID: \"dc9351b2-a2f7-40bc-bcdb-b27629a9a77f\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp"
Mar 13 11:09:56.781634 master-0 kubenswrapper[33013]: I0313 11:09:56.781568 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5mnm\" (UniqueName: \"kubernetes.io/projected/8d2f8f95-c1d5-48d5-a0ea-c172906cbd9f-kube-api-access-n5mnm\") pod \"ironic-operator-controller-manager-6bbb499bbc-g2vp6\" (UID: \"8d2f8f95-c1d5-48d5-a0ea-c172906cbd9f\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6"
Mar 13 11:09:56.790489 master-0 kubenswrapper[33013]: I0313 11:09:56.790447 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc"]
Mar 13 11:09:56.792968 master-0 kubenswrapper[33013]: I0313 11:09:56.792937 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf"]
Mar 13 11:09:56.793183 master-0 kubenswrapper[33013]: I0313 11:09:56.793158 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc"
Mar 13 11:09:56.799219 master-0 kubenswrapper[33013]: I0313 11:09:56.797874 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc"]
Mar 13 11:09:56.799219 master-0 kubenswrapper[33013]: I0313 11:09:56.797931 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz"]
Mar 13 11:09:56.799687 master-0 kubenswrapper[33013]: I0313 11:09:56.799649 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz"
Mar 13 11:09:56.799923 master-0 kubenswrapper[33013]: I0313 11:09:56.799895 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf"
Mar 13 11:09:56.804764 master-0 kubenswrapper[33013]: I0313 11:09:56.804018 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz"]
Mar 13 11:09:56.817690 master-0 kubenswrapper[33013]: I0313 11:09:56.816206 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-qb894"]
Mar 13 11:09:56.830578 master-0 kubenswrapper[33013]: I0313 11:09:56.819020 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894"
Mar 13 11:09:56.830578 master-0 kubenswrapper[33013]: I0313 11:09:56.820549 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5mnm\" (UniqueName: \"kubernetes.io/projected/8d2f8f95-c1d5-48d5-a0ea-c172906cbd9f-kube-api-access-n5mnm\") pod \"ironic-operator-controller-manager-6bbb499bbc-g2vp6\" (UID: \"8d2f8f95-c1d5-48d5-a0ea-c172906cbd9f\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6"
Mar 13 11:09:56.833008 master-0 kubenswrapper[33013]: I0313 11:09:56.832936 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf"]
Mar 13 11:09:56.876198 master-0 kubenswrapper[33013]: I0313 11:09:56.876112 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-qb894"]
Mar 13 11:09:56.888469 master-0 kubenswrapper[33013]: I0313 11:09:56.888304 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr"]
Mar 13 11:09:56.891667 master-0 kubenswrapper[33013]: I0313 11:09:56.890873 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhk6l\" (UniqueName: \"kubernetes.io/projected/57e2bcf4-7a93-426b-943c-f5a5b187190d-kube-api-access-fhk6l\") pod \"nova-operator-controller-manager-569cc54c5-qb894\" (UID: \"57e2bcf4-7a93-426b-943c-f5a5b187190d\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894"
Mar 13 11:09:56.891667 master-0 kubenswrapper[33013]: I0313 11:09:56.890927 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwrqz\" (UniqueName: \"kubernetes.io/projected/dc9351b2-a2f7-40bc-bcdb-b27629a9a77f-kube-api-access-xwrqz\") pod \"keystone-operator-controller-manager-684f77d66d-x79wp\" (UID: \"dc9351b2-a2f7-40bc-bcdb-b27629a9a77f\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp"
Mar 13 11:09:56.891667 master-0 kubenswrapper[33013]: I0313 11:09:56.890968 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg6tg\" (UniqueName: \"kubernetes.io/projected/bf8f932c-8ce0-43b0-9bdb-c307a671bb43-kube-api-access-vg6tg\") pod \"manila-operator-controller-manager-68f45f9d9f-7vspc\" (UID: \"bf8f932c-8ce0-43b0-9bdb-c307a671bb43\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc"
Mar 13 11:09:56.891667 master-0 kubenswrapper[33013]: I0313 11:09:56.891014 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnls6\" (UniqueName: \"kubernetes.io/projected/ebe5af93-c06f-4bef-83e6-ea978ff533b4-kube-api-access-gnls6\") pod \"mariadb-operator-controller-manager-658d4cdd5-d77xf\" (UID: \"ebe5af93-c06f-4bef-83e6-ea978ff533b4\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf"
Mar 13 11:09:56.891667 master-0 kubenswrapper[33013]: I0313 11:09:56.891120 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4pcd\" (UniqueName: \"kubernetes.io/projected/bca7c27a-ec95-4e7b-8a46-35e6b8cc9f9f-kube-api-access-c4pcd\") pod \"neutron-operator-controller-manager-776c5696bf-c78nz\" (UID: \"bca7c27a-ec95-4e7b-8a46-35e6b8cc9f9f\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz"
Mar 13 11:09:56.898557 master-0 kubenswrapper[33013]: I0313 11:09:56.897268 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr"
Mar 13 11:09:56.912285 master-0 kubenswrapper[33013]: I0313 11:09:56.910721 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr"]
Mar 13 11:09:56.924094 master-0 kubenswrapper[33013]: I0313 11:09:56.923701 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6"
Mar 13 11:09:56.951725 master-0 kubenswrapper[33013]: I0313 11:09:56.951661 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwrqz\" (UniqueName: \"kubernetes.io/projected/dc9351b2-a2f7-40bc-bcdb-b27629a9a77f-kube-api-access-xwrqz\") pod \"keystone-operator-controller-manager-684f77d66d-x79wp\" (UID: \"dc9351b2-a2f7-40bc-bcdb-b27629a9a77f\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp"
Mar 13 11:09:56.965128 master-0 kubenswrapper[33013]: I0313 11:09:56.965067 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk"]
Mar 13 11:09:56.966539 master-0 kubenswrapper[33013]: I0313 11:09:56.966505 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk"
Mar 13 11:09:56.973579 master-0 kubenswrapper[33013]: I0313 11:09:56.970007 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 13 11:09:56.991255 master-0 kubenswrapper[33013]: I0313 11:09:56.988493 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks"]
Mar 13 11:09:56.991255 master-0 kubenswrapper[33013]: I0313 11:09:56.990306 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks"
Mar 13 11:09:56.994738 master-0 kubenswrapper[33013]: I0313 11:09:56.992545 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg6tg\" (UniqueName: \"kubernetes.io/projected/bf8f932c-8ce0-43b0-9bdb-c307a671bb43-kube-api-access-vg6tg\") pod \"manila-operator-controller-manager-68f45f9d9f-7vspc\" (UID: \"bf8f932c-8ce0-43b0-9bdb-c307a671bb43\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc"
Mar 13 11:09:56.994738 master-0 kubenswrapper[33013]: I0313 11:09:56.992641 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnls6\" (UniqueName: \"kubernetes.io/projected/ebe5af93-c06f-4bef-83e6-ea978ff533b4-kube-api-access-gnls6\") pod \"mariadb-operator-controller-manager-658d4cdd5-d77xf\" (UID: \"ebe5af93-c06f-4bef-83e6-ea978ff533b4\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf"
Mar 13 11:09:56.994738 master-0 kubenswrapper[33013]: I0313 11:09:56.992765 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldhm\" (UniqueName: \"kubernetes.io/projected/a112961d-e636-441e-b353-0d71f573d7ff-kube-api-access-7ldhm\") pod \"octavia-operator-controller-manager-5f4f55cb5c-54bbr\" (UID: \"a112961d-e636-441e-b353-0d71f573d7ff\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr"
Mar 13 11:09:56.994738 master-0 kubenswrapper[33013]: I0313 11:09:56.992830 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4pcd\" (UniqueName: \"kubernetes.io/projected/bca7c27a-ec95-4e7b-8a46-35e6b8cc9f9f-kube-api-access-c4pcd\") pod \"neutron-operator-controller-manager-776c5696bf-c78nz\" (UID: \"bca7c27a-ec95-4e7b-8a46-35e6b8cc9f9f\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz"
Mar 13 11:09:56.994738 master-0 kubenswrapper[33013]: I0313 11:09:56.992897 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhk6l\" (UniqueName: \"kubernetes.io/projected/57e2bcf4-7a93-426b-943c-f5a5b187190d-kube-api-access-fhk6l\") pod \"nova-operator-controller-manager-569cc54c5-qb894\" (UID: \"57e2bcf4-7a93-426b-943c-f5a5b187190d\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894"
Mar 13 11:09:57.007720 master-0 kubenswrapper[33013]: I0313 11:09:57.006679 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk"]
Mar 13 11:09:57.018846 master-0 kubenswrapper[33013]: I0313 11:09:57.018741 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhk6l\" (UniqueName: \"kubernetes.io/projected/57e2bcf4-7a93-426b-943c-f5a5b187190d-kube-api-access-fhk6l\") pod \"nova-operator-controller-manager-569cc54c5-qb894\" (UID: \"57e2bcf4-7a93-426b-943c-f5a5b187190d\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894"
Mar 13 11:09:57.024070 master-0 kubenswrapper[33013]: I0313 11:09:57.021166 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894"
Mar 13 11:09:57.026987 master-0 kubenswrapper[33013]: I0313 11:09:57.026498 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4pcd\" (UniqueName: \"kubernetes.io/projected/bca7c27a-ec95-4e7b-8a46-35e6b8cc9f9f-kube-api-access-c4pcd\") pod \"neutron-operator-controller-manager-776c5696bf-c78nz\" (UID: \"bca7c27a-ec95-4e7b-8a46-35e6b8cc9f9f\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz"
Mar 13 11:09:57.034204 master-0 kubenswrapper[33013]: I0313 11:09:57.031311 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks"]
Mar 13 11:09:57.035673 master-0 kubenswrapper[33013]: I0313 11:09:57.034738 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnls6\" (UniqueName: \"kubernetes.io/projected/ebe5af93-c06f-4bef-83e6-ea978ff533b4-kube-api-access-gnls6\") pod \"mariadb-operator-controller-manager-658d4cdd5-d77xf\" (UID: \"ebe5af93-c06f-4bef-83e6-ea978ff533b4\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf"
Mar 13 11:09:57.049104 master-0 kubenswrapper[33013]: I0313 11:09:57.048412 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg6tg\" (UniqueName: \"kubernetes.io/projected/bf8f932c-8ce0-43b0-9bdb-c307a671bb43-kube-api-access-vg6tg\") pod \"manila-operator-controller-manager-68f45f9d9f-7vspc\" (UID: \"bf8f932c-8ce0-43b0-9bdb-c307a671bb43\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc"
Mar 13 11:09:57.075098 master-0 kubenswrapper[33013]: I0313 11:09:57.075052 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-ss27q"]
Mar 13 11:09:57.078239 master-0 kubenswrapper[33013]: I0313 11:09:57.078213 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q"
Mar 13 11:09:57.078641 master-0 kubenswrapper[33013]: I0313 11:09:57.078539 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp"
Mar 13 11:09:57.088173 master-0 kubenswrapper[33013]: I0313 11:09:57.088115 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss"]
Mar 13 11:09:57.090053 master-0 kubenswrapper[33013]: I0313 11:09:57.090013 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss"
Mar 13 11:09:57.095767 master-0 kubenswrapper[33013]: I0313 11:09:57.095707 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk"
Mar 13 11:09:57.095871 master-0 kubenswrapper[33013]: I0313 11:09:57.095819 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjn5b\" (UniqueName: \"kubernetes.io/projected/c6673eb2-9dc9-48da-b64f-ac7aa72aae98-kube-api-access-gjn5b\") pod \"ovn-operator-controller-manager-bbc5b68f9-srfks\" (UID: \"c6673eb2-9dc9-48da-b64f-ac7aa72aae98\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks"
Mar 13 11:09:57.096226 master-0 kubenswrapper[33013]: I0313 11:09:57.095926 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnq2z\" (UniqueName: \"kubernetes.io/projected/07aae105-cbfc-4df6-97ee-2231d7611d03-kube-api-access-tnq2z\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk"
Mar 13 11:09:57.096226 master-0 kubenswrapper[33013]: I0313 11:09:57.095982 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf"
Mar 13 11:09:57.096226 master-0 kubenswrapper[33013]: I0313 11:09:57.096030 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldhm\" (UniqueName: \"kubernetes.io/projected/a112961d-e636-441e-b353-0d71f573d7ff-kube-api-access-7ldhm\") pod \"octavia-operator-controller-manager-5f4f55cb5c-54bbr\" (UID: \"a112961d-e636-441e-b353-0d71f573d7ff\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr"
Mar 13 11:09:57.096632 master-0 kubenswrapper[33013]: E0313 11:09:57.096579 33013 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 13 11:09:57.096701 master-0 kubenswrapper[33013]: E0313 11:09:57.096678 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert podName:3670abc6-2527-4580-bf31-36cc0294afd4 nodeName:}" failed. No retries permitted until 2026-03-13 11:09:58.096651655 +0000 UTC m=+781.572605004 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert") pod "infra-operator-controller-manager-b8c8d7cc8-gstmf" (UID: "3670abc6-2527-4580-bf31-36cc0294afd4") : secret "infra-operator-webhook-server-cert" not found
Mar 13 11:09:57.117415 master-0 kubenswrapper[33013]: I0313 11:09:57.116885 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-ss27q"]
Mar 13 11:09:57.130641 master-0 kubenswrapper[33013]: I0313 11:09:57.130382 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldhm\" (UniqueName: \"kubernetes.io/projected/a112961d-e636-441e-b353-0d71f573d7ff-kube-api-access-7ldhm\") pod \"octavia-operator-controller-manager-5f4f55cb5c-54bbr\" (UID: \"a112961d-e636-441e-b353-0d71f573d7ff\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr"
Mar 13 11:09:57.139938 master-0 kubenswrapper[33013]: I0313 11:09:57.135717 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9"]
Mar 13 11:09:57.139938 master-0 kubenswrapper[33013]: I0313 11:09:57.137217 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9"
Mar 13 11:09:57.139938 master-0 kubenswrapper[33013]: I0313 11:09:57.138699 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc"
Mar 13 11:09:57.156519 master-0 kubenswrapper[33013]: I0313 11:09:57.156471 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss"]
Mar 13 11:09:57.165810 master-0 kubenswrapper[33013]: I0313 11:09:57.165700 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9"]
Mar 13 11:09:57.175324 master-0 kubenswrapper[33013]: I0313 11:09:57.175271 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv"]
Mar 13 11:09:57.179176 master-0 kubenswrapper[33013]: I0313 11:09:57.178992 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv"
Mar 13 11:09:57.185725 master-0 kubenswrapper[33013]: I0313 11:09:57.183430 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv"]
Mar 13 11:09:57.219791 master-0 kubenswrapper[33013]: I0313 11:09:57.218722 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx"]
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: I0313 11:09:57.220214 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx"
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: I0313 11:09:57.222521 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx"]
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: I0313 11:09:57.228379 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7hk\" (UniqueName: \"kubernetes.io/projected/cc6d57f2-8d08-48fc-ab2c-7e3e8a00560b-kube-api-access-vw7hk\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-pgrw9\" (UID: \"cc6d57f2-8d08-48fc-ab2c-7e3e8a00560b\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9"
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: I0313 11:09:57.228575 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk"
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: I0313 11:09:57.228745 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjn5b\" (UniqueName: \"kubernetes.io/projected/c6673eb2-9dc9-48da-b64f-ac7aa72aae98-kube-api-access-gjn5b\") pod \"ovn-operator-controller-manager-bbc5b68f9-srfks\" (UID: \"c6673eb2-9dc9-48da-b64f-ac7aa72aae98\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks"
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: I0313 11:09:57.228881 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl6cb\" (UniqueName: \"kubernetes.io/projected/5771aaff-3b51-43f6-886d-8c9beb93d212-kube-api-access-gl6cb\") pod \"placement-operator-controller-manager-574d45c66c-ckxss\" (UID: \"5771aaff-3b51-43f6-886d-8c9beb93d212\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss"
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: I0313 11:09:57.229007 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnq2z\" (UniqueName: \"kubernetes.io/projected/07aae105-cbfc-4df6-97ee-2231d7611d03-kube-api-access-tnq2z\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk"
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: I0313 11:09:57.229100 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t4bv\" (UniqueName: \"kubernetes.io/projected/f5eb3916-1c7f-434f-a559-141a2b52d2c3-kube-api-access-6t4bv\") pod \"swift-operator-controller-manager-677c674df7-ss27q\" (UID: \"f5eb3916-1c7f-434f-a559-141a2b52d2c3\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q"
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: E0313 11:09:57.229421 33013 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 11:09:57.234903 master-0 kubenswrapper[33013]: E0313 11:09:57.229482 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert podName:07aae105-cbfc-4df6-97ee-2231d7611d03 nodeName:}" failed. No retries permitted until 2026-03-13 11:09:57.729463601 +0000 UTC m=+781.205416950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" (UID: "07aae105-cbfc-4df6-97ee-2231d7611d03") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 13 11:09:57.254735 master-0 kubenswrapper[33013]: I0313 11:09:57.248228 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"]
Mar 13 11:09:57.254735 master-0 kubenswrapper[33013]: I0313 11:09:57.250126 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"
Mar 13 11:09:57.254735 master-0 kubenswrapper[33013]: I0313 11:09:57.251737 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Mar 13 11:09:57.254735 master-0 kubenswrapper[33013]: I0313 11:09:57.252374 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Mar 13 11:09:57.262273 master-0 kubenswrapper[33013]: I0313 11:09:57.262228 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"]
Mar 13 11:09:57.279093 master-0 kubenswrapper[33013]: I0313 11:09:57.273797 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnq2z\" (UniqueName: \"kubernetes.io/projected/07aae105-cbfc-4df6-97ee-2231d7611d03-kube-api-access-tnq2z\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk"
Mar 13 11:09:57.283043 master-0 kubenswrapper[33013]: I0313 11:09:57.282998 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz"
Mar 13 11:09:57.295838 master-0 kubenswrapper[33013]: I0313 11:09:57.295677 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf"
Mar 13 11:09:57.301296 master-0 kubenswrapper[33013]: I0313 11:09:57.301263 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjn5b\" (UniqueName: \"kubernetes.io/projected/c6673eb2-9dc9-48da-b64f-ac7aa72aae98-kube-api-access-gjn5b\") pod \"ovn-operator-controller-manager-bbc5b68f9-srfks\" (UID: \"c6673eb2-9dc9-48da-b64f-ac7aa72aae98\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks"
Mar 13 11:09:57.313716 master-0 kubenswrapper[33013]: I0313 11:09:57.312646 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr"]
Mar 13 11:09:57.314962 master-0 kubenswrapper[33013]: I0313 11:09:57.313989 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr"
Mar 13 11:09:57.321911 master-0 kubenswrapper[33013]: I0313 11:09:57.321730 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr"]
Mar 13 11:09:57.332915 master-0 kubenswrapper[33013]: I0313 11:09:57.331395 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flzp6\" (UniqueName: \"kubernetes.io/projected/43c4e3b3-c529-4106-9948-6c9731fffb4c-kube-api-access-flzp6\") pod \"watcher-operator-controller-manager-6dd88c6f67-b6rqx\" (UID: \"43c4e3b3-c529-4106-9948-6c9731fffb4c\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx"
Mar 13 11:09:57.332915 master-0 kubenswrapper[33013]: I0313 11:09:57.331458 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7hk\" (UniqueName: \"kubernetes.io/projected/cc6d57f2-8d08-48fc-ab2c-7e3e8a00560b-kube-api-access-vw7hk\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-pgrw9\" (UID: \"cc6d57f2-8d08-48fc-ab2c-7e3e8a00560b\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9"
Mar 13 11:09:57.332915 master-0 kubenswrapper[33013]: I0313 11:09:57.331503 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldbv\" (UniqueName: \"kubernetes.io/projected/baede48e-55be-4e10-ad58-d36d1a72d782-kube-api-access-jldbv\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"
Mar 13 11:09:57.332915 master-0 kubenswrapper[33013]: I0313 11:09:57.331529 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"
Mar 13 11:09:57.332915 master-0 kubenswrapper[33013]: I0313 11:09:57.331634 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtbjv\" (UniqueName: \"kubernetes.io/projected/8062a09f-5b4d-4f2c-bf7a-468a9c04f706-kube-api-access-xtbjv\") pod \"test-operator-controller-manager-5c5cb9c4d7-87rhv\" (UID: \"8062a09f-5b4d-4f2c-bf7a-468a9c04f706\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv"
Mar 13 11:09:57.332915 master-0 kubenswrapper[33013]: I0313 11:09:57.331716 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl6cb\" (UniqueName: \"kubernetes.io/projected/5771aaff-3b51-43f6-886d-8c9beb93d212-kube-api-access-gl6cb\") pod \"placement-operator-controller-manager-574d45c66c-ckxss\" (UID: \"5771aaff-3b51-43f6-886d-8c9beb93d212\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss"
Mar 13 11:09:57.332915 master-0 kubenswrapper[33013]: I0313 11:09:57.331802 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"
Mar 13 11:09:57.332915 master-0 kubenswrapper[33013]: I0313 11:09:57.331864 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t4bv\" (UniqueName: \"kubernetes.io/projected/f5eb3916-1c7f-434f-a559-141a2b52d2c3-kube-api-access-6t4bv\") pod \"swift-operator-controller-manager-677c674df7-ss27q\" (UID: \"f5eb3916-1c7f-434f-a559-141a2b52d2c3\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q"
Mar 13 11:09:57.345123 master-0 kubenswrapper[33013]: I0313 11:09:57.344160 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr"
Mar 13 11:09:57.368708 master-0 kubenswrapper[33013]: I0313 11:09:57.368653 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw7hk\" (UniqueName: \"kubernetes.io/projected/cc6d57f2-8d08-48fc-ab2c-7e3e8a00560b-kube-api-access-vw7hk\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-pgrw9\" (UID: \"cc6d57f2-8d08-48fc-ab2c-7e3e8a00560b\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9"
Mar 13 11:09:57.371767 master-0 kubenswrapper[33013]: I0313 11:09:57.369705 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl6cb\" (UniqueName: \"kubernetes.io/projected/5771aaff-3b51-43f6-886d-8c9beb93d212-kube-api-access-gl6cb\") pod \"placement-operator-controller-manager-574d45c66c-ckxss\" (UID: \"5771aaff-3b51-43f6-886d-8c9beb93d212\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss"
Mar 13 11:09:57.371767 master-0 kubenswrapper[33013]: I0313 11:09:57.370883 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t4bv\" (UniqueName: \"kubernetes.io/projected/f5eb3916-1c7f-434f-a559-141a2b52d2c3-kube-api-access-6t4bv\") pod \"swift-operator-controller-manager-677c674df7-ss27q\" (UID: \"f5eb3916-1c7f-434f-a559-141a2b52d2c3\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q"
Mar 13 11:09:57.378650 master-0 kubenswrapper[33013]: I0313 11:09:57.378194 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks"
Mar 13 11:09:57.432686 master-0 kubenswrapper[33013]: I0313 11:09:57.425989 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q"
Mar 13 11:09:57.432686 master-0 kubenswrapper[33013]: I0313 11:09:57.426802 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss"
Mar 13 11:09:57.450577 master-0 kubenswrapper[33013]: I0313 11:09:57.447570 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"
Mar 13 11:09:57.450577 master-0 kubenswrapper[33013]: I0313 11:09:57.447816 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4sbc\" (UniqueName: \"kubernetes.io/projected/f6028561-8260-4458-b9e2-16b72191f792-kube-api-access-h4sbc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xsksr\" (UID: \"f6028561-8260-4458-b9e2-16b72191f792\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr"
Mar 13 11:09:57.450577 master-0 kubenswrapper[33013]: I0313 11:09:57.447899 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flzp6\" (UniqueName: \"kubernetes.io/projected/43c4e3b3-c529-4106-9948-6c9731fffb4c-kube-api-access-flzp6\") pod \"watcher-operator-controller-manager-6dd88c6f67-b6rqx\" (UID: \"43c4e3b3-c529-4106-9948-6c9731fffb4c\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx"
Mar 13 11:09:57.450577 master-0 kubenswrapper[33013]: I0313 11:09:57.448024 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jldbv\" (UniqueName: \"kubernetes.io/projected/baede48e-55be-4e10-ad58-d36d1a72d782-kube-api-access-jldbv\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"
Mar 13 11:09:57.450577 master-0 kubenswrapper[33013]: I0313 11:09:57.448054 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"
Mar 13 11:09:57.450577 master-0 kubenswrapper[33013]: I0313 11:09:57.448216 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtbjv\" (UniqueName: \"kubernetes.io/projected/8062a09f-5b4d-4f2c-bf7a-468a9c04f706-kube-api-access-xtbjv\") pod \"test-operator-controller-manager-5c5cb9c4d7-87rhv\" (UID: \"8062a09f-5b4d-4f2c-bf7a-468a9c04f706\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv"
Mar 13 11:09:57.450577 master-0 kubenswrapper[33013]: E0313 11:09:57.448787 33013 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 13 11:09:57.450577 master-0 kubenswrapper[33013]: E0313 11:09:57.448844 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:09:57.948822049 +0000 UTC m=+781.424775398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "metrics-server-cert" not found
Mar 13 11:09:57.450577 master-0 kubenswrapper[33013]: E0313 11:09:57.450519 33013 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 13 11:09:57.450903 master-0 kubenswrapper[33013]: E0313 11:09:57.450650 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:09:57.95061935 +0000 UTC m=+781.426572689 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "webhook-server-cert" not found
Mar 13 11:09:57.461855 master-0 kubenswrapper[33013]: I0313 11:09:57.461576 33013 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 11:09:57.479595 master-0 kubenswrapper[33013]: I0313 11:09:57.479150 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtbjv\" (UniqueName: \"kubernetes.io/projected/8062a09f-5b4d-4f2c-bf7a-468a9c04f706-kube-api-access-xtbjv\") pod \"test-operator-controller-manager-5c5cb9c4d7-87rhv\" (UID: \"8062a09f-5b4d-4f2c-bf7a-468a9c04f706\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv"
Mar 13 11:09:57.482001 master-0 kubenswrapper[33013]: I0313
11:09:57.481932 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jldbv\" (UniqueName: \"kubernetes.io/projected/baede48e-55be-4e10-ad58-d36d1a72d782-kube-api-access-jldbv\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:09:57.485673 master-0 kubenswrapper[33013]: I0313 11:09:57.485559 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flzp6\" (UniqueName: \"kubernetes.io/projected/43c4e3b3-c529-4106-9948-6c9731fffb4c-kube-api-access-flzp6\") pod \"watcher-operator-controller-manager-6dd88c6f67-b6rqx\" (UID: \"43c4e3b3-c529-4106-9948-6c9731fffb4c\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx" Mar 13 11:09:57.516993 master-0 kubenswrapper[33013]: I0313 11:09:57.516935 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx" Mar 13 11:09:57.561471 master-0 kubenswrapper[33013]: I0313 11:09:57.553510 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4sbc\" (UniqueName: \"kubernetes.io/projected/f6028561-8260-4458-b9e2-16b72191f792-kube-api-access-h4sbc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xsksr\" (UID: \"f6028561-8260-4458-b9e2-16b72191f792\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr" Mar 13 11:09:57.628570 master-0 kubenswrapper[33013]: I0313 11:09:57.621247 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4sbc\" (UniqueName: \"kubernetes.io/projected/f6028561-8260-4458-b9e2-16b72191f792-kube-api-access-h4sbc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xsksr\" (UID: \"f6028561-8260-4458-b9e2-16b72191f792\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr" Mar 13 11:09:57.666999 master-0 kubenswrapper[33013]: I0313 11:09:57.665785 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9"] Mar 13 11:09:57.676840 master-0 kubenswrapper[33013]: I0313 11:09:57.671128 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9" Mar 13 11:09:57.685612 master-0 kubenswrapper[33013]: I0313 11:09:57.685523 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2"] Mar 13 11:09:57.685683 master-0 kubenswrapper[33013]: I0313 11:09:57.685622 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt"] Mar 13 11:09:57.729812 master-0 kubenswrapper[33013]: I0313 11:09:57.726723 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv" Mar 13 11:09:57.764105 master-0 kubenswrapper[33013]: I0313 11:09:57.760353 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" Mar 13 11:09:57.764105 master-0 kubenswrapper[33013]: E0313 11:09:57.760683 33013 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 11:09:57.764105 master-0 kubenswrapper[33013]: E0313 11:09:57.760751 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert podName:07aae105-cbfc-4df6-97ee-2231d7611d03 nodeName:}" failed. No retries permitted until 2026-03-13 11:09:58.760731588 +0000 UTC m=+782.236684937 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" (UID: "07aae105-cbfc-4df6-97ee-2231d7611d03") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 11:09:57.780000 master-0 kubenswrapper[33013]: I0313 11:09:57.779619 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr" Mar 13 11:09:57.959375 master-0 kubenswrapper[33013]: I0313 11:09:57.959333 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl"] Mar 13 11:09:57.967760 master-0 kubenswrapper[33013]: I0313 11:09:57.964343 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:09:57.967760 master-0 kubenswrapper[33013]: I0313 11:09:57.964798 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:09:57.967760 master-0 kubenswrapper[33013]: E0313 11:09:57.965126 33013 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 11:09:57.967760 master-0 kubenswrapper[33013]: E0313 11:09:57.965210 33013 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:09:58.965189856 +0000 UTC m=+782.441143215 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "metrics-server-cert" not found Mar 13 11:09:57.967760 master-0 kubenswrapper[33013]: E0313 11:09:57.965864 33013 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 11:09:57.967760 master-0 kubenswrapper[33013]: E0313 11:09:57.965934 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:09:58.965885766 +0000 UTC m=+782.441839115 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "webhook-server-cert" not found Mar 13 11:09:58.064700 master-0 kubenswrapper[33013]: I0313 11:09:58.063888 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl"] Mar 13 11:09:58.094947 master-0 kubenswrapper[33013]: I0313 11:09:58.094888 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77"] Mar 13 11:09:58.178222 master-0 kubenswrapper[33013]: I0313 11:09:58.178165 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:09:58.179168 master-0 kubenswrapper[33013]: E0313 11:09:58.179129 33013 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 11:09:58.179457 master-0 kubenswrapper[33013]: E0313 11:09:58.179440 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert podName:3670abc6-2527-4580-bf31-36cc0294afd4 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:00.179392338 +0000 UTC m=+783.655345707 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert") pod "infra-operator-controller-manager-b8c8d7cc8-gstmf" (UID: "3670abc6-2527-4580-bf31-36cc0294afd4") : secret "infra-operator-webhook-server-cert" not found Mar 13 11:09:58.289871 master-0 kubenswrapper[33013]: I0313 11:09:58.288888 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" event={"ID":"306ace2b-16a3-4373-ba6b-e2ed8f8d9d89","Type":"ContainerStarted","Data":"9fb9709371d366aa907f635521c4cea4d8a125880813b4fb69ed76d7c84aa4f6"} Mar 13 11:09:58.305248 master-0 kubenswrapper[33013]: I0313 11:09:58.305160 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77" event={"ID":"1dcceaeb-7636-406b-b013-b76f9a71bee7","Type":"ContainerStarted","Data":"6579f5fc48fa8e5fc76d0a7cc32162b622ae585748a56067e1160d34f3d78635"} Mar 13 11:09:58.332859 master-0 kubenswrapper[33013]: I0313 11:09:58.332802 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" event={"ID":"c786eb3c-e9a9-4184-8729-c6f379982a73","Type":"ContainerStarted","Data":"163e75494c3faf0ceae5bf1d1febf93807a2345b8d23e5b73aea4902fca299a6"} Mar 13 11:09:58.361310 master-0 kubenswrapper[33013]: I0313 11:09:58.356985 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" event={"ID":"72eda6f5-7ceb-41a5-a145-c823bf409279","Type":"ContainerStarted","Data":"b1caeddbd189a2f176db1286619acec51bb07261aafa50bd747174c8028edfb3"} Mar 13 11:09:58.384241 master-0 kubenswrapper[33013]: I0313 11:09:58.384185 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" 
event={"ID":"b3b82413-ba4e-4c33-8b26-b300aef4c26a","Type":"ContainerStarted","Data":"acb876201752df49a22e7b0a1c6dfe4b02bb9629ecb6832f10eb808abb63686e"} Mar 13 11:09:58.406615 master-0 kubenswrapper[33013]: I0313 11:09:58.400501 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" event={"ID":"b1a6f67c-186b-46ec-a1c5-284bfed80fca","Type":"ContainerStarted","Data":"d8e80a3a8c6f3c58fc8b7ffc8adeac8f6bc0a7080ff5e7880dd971ad80e6c057"} Mar 13 11:09:58.432639 master-0 kubenswrapper[33013]: I0313 11:09:58.423473 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-qb894"] Mar 13 11:09:58.433841 master-0 kubenswrapper[33013]: W0313 11:09:58.433403 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e2bcf4_7a93_426b_943c_f5a5b187190d.slice/crio-e293d94969e5ed1bac7cd69ba2cd872a5b432619d35c4cb008093b8e0954bf09 WatchSource:0}: Error finding container e293d94969e5ed1bac7cd69ba2cd872a5b432619d35c4cb008093b8e0954bf09: Status 404 returned error can't find the container with id e293d94969e5ed1bac7cd69ba2cd872a5b432619d35c4cb008093b8e0954bf09 Mar 13 11:09:58.479034 master-0 kubenswrapper[33013]: I0313 11:09:58.475684 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6"] Mar 13 11:09:58.580744 master-0 kubenswrapper[33013]: I0313 11:09:58.580506 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp"] Mar 13 11:09:58.612616 master-0 kubenswrapper[33013]: W0313 11:09:58.605559 33013 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc9351b2_a2f7_40bc_bcdb_b27629a9a77f.slice/crio-fd7384e02f0a524f7f574df1af0d2acf791695708eeedb39dc013826da68c130 WatchSource:0}: Error finding container fd7384e02f0a524f7f574df1af0d2acf791695708eeedb39dc013826da68c130: Status 404 returned error can't find the container with id fd7384e02f0a524f7f574df1af0d2acf791695708eeedb39dc013826da68c130 Mar 13 11:09:58.754780 master-0 kubenswrapper[33013]: W0313 11:09:58.753377 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbca7c27a_ec95_4e7b_8a46_35e6b8cc9f9f.slice/crio-f7e83e5b56c5d0b9c315f7289928327598aa6798d88db94d673dad23815d96ef WatchSource:0}: Error finding container f7e83e5b56c5d0b9c315f7289928327598aa6798d88db94d673dad23815d96ef: Status 404 returned error can't find the container with id f7e83e5b56c5d0b9c315f7289928327598aa6798d88db94d673dad23815d96ef Mar 13 11:09:58.756629 master-0 kubenswrapper[33013]: I0313 11:09:58.756561 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc"] Mar 13 11:09:58.756693 master-0 kubenswrapper[33013]: I0313 11:09:58.756638 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz"] Mar 13 11:09:58.819815 master-0 kubenswrapper[33013]: I0313 11:09:58.819723 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" Mar 13 11:09:58.820435 master-0 kubenswrapper[33013]: E0313 11:09:58.820408 33013 secret.go:189] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 11:09:58.820496 master-0 kubenswrapper[33013]: E0313 11:09:58.820462 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert podName:07aae105-cbfc-4df6-97ee-2231d7611d03 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:00.820443892 +0000 UTC m=+784.296397241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" (UID: "07aae105-cbfc-4df6-97ee-2231d7611d03") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 11:09:59.026320 master-0 kubenswrapper[33013]: I0313 11:09:59.026163 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:09:59.028741 master-0 kubenswrapper[33013]: I0313 11:09:59.026432 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:09:59.028741 master-0 kubenswrapper[33013]: E0313 11:09:59.026450 33013 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 11:09:59.028741 master-0 
kubenswrapper[33013]: E0313 11:09:59.026552 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:01.026525835 +0000 UTC m=+784.502479184 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "webhook-server-cert" not found Mar 13 11:09:59.028741 master-0 kubenswrapper[33013]: E0313 11:09:59.026662 33013 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 11:09:59.028741 master-0 kubenswrapper[33013]: E0313 11:09:59.026718 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:01.02670235 +0000 UTC m=+784.502655699 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "metrics-server-cert" not found Mar 13 11:09:59.309695 master-0 kubenswrapper[33013]: I0313 11:09:59.309636 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx"] Mar 13 11:09:59.312999 master-0 kubenswrapper[33013]: W0313 11:09:59.312920 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c4e3b3_c529_4106_9948_6c9731fffb4c.slice/crio-9bfecf1a07f42ed274c2ee8ce40bcfc144ca35a8819b9e529a8d728fb5d79734 WatchSource:0}: Error finding container 9bfecf1a07f42ed274c2ee8ce40bcfc144ca35a8819b9e529a8d728fb5d79734: Status 404 returned error can't find the container with id 9bfecf1a07f42ed274c2ee8ce40bcfc144ca35a8819b9e529a8d728fb5d79734 Mar 13 11:09:59.418130 master-0 kubenswrapper[33013]: I0313 11:09:59.417974 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6" event={"ID":"8d2f8f95-c1d5-48d5-a0ea-c172906cbd9f","Type":"ContainerStarted","Data":"2006466b3a78cd083f493d8017d2642dd00fcd37c170b02f7533490e452cfe96"} Mar 13 11:09:59.421966 master-0 kubenswrapper[33013]: I0313 11:09:59.421838 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz" event={"ID":"bca7c27a-ec95-4e7b-8a46-35e6b8cc9f9f","Type":"ContainerStarted","Data":"f7e83e5b56c5d0b9c315f7289928327598aa6798d88db94d673dad23815d96ef"} Mar 13 11:09:59.424540 master-0 kubenswrapper[33013]: I0313 11:09:59.424454 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx" 
event={"ID":"43c4e3b3-c529-4106-9948-6c9731fffb4c","Type":"ContainerStarted","Data":"9bfecf1a07f42ed274c2ee8ce40bcfc144ca35a8819b9e529a8d728fb5d79734"} Mar 13 11:09:59.427725 master-0 kubenswrapper[33013]: I0313 11:09:59.427676 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc" event={"ID":"bf8f932c-8ce0-43b0-9bdb-c307a671bb43","Type":"ContainerStarted","Data":"3f34a64f28f4b0799dd30f11f50497a10ce440a34cd526b9db3967ed382eeb7b"} Mar 13 11:09:59.433923 master-0 kubenswrapper[33013]: I0313 11:09:59.433521 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894" event={"ID":"57e2bcf4-7a93-426b-943c-f5a5b187190d","Type":"ContainerStarted","Data":"e293d94969e5ed1bac7cd69ba2cd872a5b432619d35c4cb008093b8e0954bf09"} Mar 13 11:09:59.437693 master-0 kubenswrapper[33013]: I0313 11:09:59.437631 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp" event={"ID":"dc9351b2-a2f7-40bc-bcdb-b27629a9a77f","Type":"ContainerStarted","Data":"fd7384e02f0a524f7f574df1af0d2acf791695708eeedb39dc013826da68c130"} Mar 13 11:09:59.498789 master-0 kubenswrapper[33013]: I0313 11:09:59.498155 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr"] Mar 13 11:09:59.529386 master-0 kubenswrapper[33013]: W0313 11:09:59.529210 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda112961d_e636_441e_b353_0d71f573d7ff.slice/crio-79500859e00e46982fe7f9c5c54d09bb1017fb99c36941f880f9ad0c864bdffb WatchSource:0}: Error finding container 79500859e00e46982fe7f9c5c54d09bb1017fb99c36941f880f9ad0c864bdffb: Status 404 returned error can't find the container with id 
79500859e00e46982fe7f9c5c54d09bb1017fb99c36941f880f9ad0c864bdffb Mar 13 11:09:59.555601 master-0 kubenswrapper[33013]: I0313 11:09:59.555518 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9"] Mar 13 11:09:59.589340 master-0 kubenswrapper[33013]: W0313 11:09:59.589254 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6028561_8260_4458_b9e2_16b72191f792.slice/crio-63fb3179a861b2744520272701e0c9940f3020d96c1450b96cd88a7b403e7e0e WatchSource:0}: Error finding container 63fb3179a861b2744520272701e0c9940f3020d96c1450b96cd88a7b403e7e0e: Status 404 returned error can't find the container with id 63fb3179a861b2744520272701e0c9940f3020d96c1450b96cd88a7b403e7e0e Mar 13 11:09:59.599685 master-0 kubenswrapper[33013]: I0313 11:09:59.599642 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf"] Mar 13 11:09:59.601070 master-0 kubenswrapper[33013]: W0313 11:09:59.601021 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc6d57f2_8d08_48fc_ab2c_7e3e8a00560b.slice/crio-c6cfe4c635b97d09ab6196164142c996e7e011c4fb21e628f08644cd41140f20 WatchSource:0}: Error finding container c6cfe4c635b97d09ab6196164142c996e7e011c4fb21e628f08644cd41140f20: Status 404 returned error can't find the container with id c6cfe4c635b97d09ab6196164142c996e7e011c4fb21e628f08644cd41140f20 Mar 13 11:09:59.619942 master-0 kubenswrapper[33013]: I0313 11:09:59.618679 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-ss27q"] Mar 13 11:09:59.628756 master-0 kubenswrapper[33013]: I0313 11:09:59.628708 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks"] Mar 13 11:09:59.636151 master-0 kubenswrapper[33013]: I0313 11:09:59.636081 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr"] Mar 13 11:09:59.648006 master-0 kubenswrapper[33013]: I0313 11:09:59.647947 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss"] Mar 13 11:09:59.660128 master-0 kubenswrapper[33013]: I0313 11:09:59.660071 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv"] Mar 13 11:10:00.257259 master-0 kubenswrapper[33013]: I0313 11:10:00.257005 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:10:00.257259 master-0 kubenswrapper[33013]: E0313 11:10:00.257230 33013 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 11:10:00.257870 master-0 kubenswrapper[33013]: E0313 11:10:00.257339 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert podName:3670abc6-2527-4580-bf31-36cc0294afd4 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:04.257310875 +0000 UTC m=+787.733264224 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert") pod "infra-operator-controller-manager-b8c8d7cc8-gstmf" (UID: "3670abc6-2527-4580-bf31-36cc0294afd4") : secret "infra-operator-webhook-server-cert" not found Mar 13 11:10:00.537927 master-0 kubenswrapper[33013]: I0313 11:10:00.537766 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss" event={"ID":"5771aaff-3b51-43f6-886d-8c9beb93d212","Type":"ContainerStarted","Data":"eb1329836db2374e1e6ef136cfa071720bb9763a6611abdca22d60f1f2a861c9"} Mar 13 11:10:00.540962 master-0 kubenswrapper[33013]: I0313 11:10:00.540843 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9" event={"ID":"cc6d57f2-8d08-48fc-ab2c-7e3e8a00560b","Type":"ContainerStarted","Data":"c6cfe4c635b97d09ab6196164142c996e7e011c4fb21e628f08644cd41140f20"} Mar 13 11:10:00.547757 master-0 kubenswrapper[33013]: I0313 11:10:00.547695 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr" event={"ID":"a112961d-e636-441e-b353-0d71f573d7ff","Type":"ContainerStarted","Data":"79500859e00e46982fe7f9c5c54d09bb1017fb99c36941f880f9ad0c864bdffb"} Mar 13 11:10:00.575653 master-0 kubenswrapper[33013]: I0313 11:10:00.575555 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf" event={"ID":"ebe5af93-c06f-4bef-83e6-ea978ff533b4","Type":"ContainerStarted","Data":"d2dd49af1a0f48964b0101049c92acd0890da184f3def1d340c6fd3bf3d8a59b"} Mar 13 11:10:00.583818 master-0 kubenswrapper[33013]: I0313 11:10:00.579260 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv" 
event={"ID":"8062a09f-5b4d-4f2c-bf7a-468a9c04f706","Type":"ContainerStarted","Data":"1857f177a9e0ae51f843cab8b3bbd3406fa9bdd6acd8693a4d2cdb8bf6056b7b"} Mar 13 11:10:00.607361 master-0 kubenswrapper[33013]: I0313 11:10:00.607118 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks" event={"ID":"c6673eb2-9dc9-48da-b64f-ac7aa72aae98","Type":"ContainerStarted","Data":"f85ce30977492ff411dbf366f0ab845e00d6ca25c9ad7a589baf21a0b34a78d2"} Mar 13 11:10:00.610726 master-0 kubenswrapper[33013]: I0313 11:10:00.610671 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr" event={"ID":"f6028561-8260-4458-b9e2-16b72191f792","Type":"ContainerStarted","Data":"63fb3179a861b2744520272701e0c9940f3020d96c1450b96cd88a7b403e7e0e"} Mar 13 11:10:00.611883 master-0 kubenswrapper[33013]: I0313 11:10:00.611836 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q" event={"ID":"f5eb3916-1c7f-434f-a559-141a2b52d2c3","Type":"ContainerStarted","Data":"988de0719b791379f7b3aa4fd1bd2df60ba3e140c9a836e185ada57ae975c56d"} Mar 13 11:10:00.932331 master-0 kubenswrapper[33013]: I0313 11:10:00.932221 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" Mar 13 11:10:00.932716 master-0 kubenswrapper[33013]: E0313 11:10:00.932445 33013 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 11:10:00.932716 master-0 
kubenswrapper[33013]: E0313 11:10:00.932556 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert podName:07aae105-cbfc-4df6-97ee-2231d7611d03 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:04.932524332 +0000 UTC m=+788.408477681 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" (UID: "07aae105-cbfc-4df6-97ee-2231d7611d03") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 11:10:01.034636 master-0 kubenswrapper[33013]: I0313 11:10:01.034544 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:01.034999 master-0 kubenswrapper[33013]: I0313 11:10:01.034764 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:01.035370 master-0 kubenswrapper[33013]: E0313 11:10:01.035338 33013 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 11:10:01.035461 master-0 kubenswrapper[33013]: E0313 11:10:01.035424 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs 
podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:05.035403694 +0000 UTC m=+788.511357043 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "webhook-server-cert" not found Mar 13 11:10:01.035892 master-0 kubenswrapper[33013]: E0313 11:10:01.035840 33013 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 11:10:01.035974 master-0 kubenswrapper[33013]: E0313 11:10:01.035950 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:05.035925929 +0000 UTC m=+788.511879418 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "metrics-server-cert" not found Mar 13 11:10:04.345979 master-0 kubenswrapper[33013]: I0313 11:10:04.345908 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:10:04.346780 master-0 kubenswrapper[33013]: E0313 11:10:04.346129 33013 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 11:10:04.346780 master-0 kubenswrapper[33013]: E0313 11:10:04.346254 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert podName:3670abc6-2527-4580-bf31-36cc0294afd4 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:12.346221689 +0000 UTC m=+795.822175028 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert") pod "infra-operator-controller-manager-b8c8d7cc8-gstmf" (UID: "3670abc6-2527-4580-bf31-36cc0294afd4") : secret "infra-operator-webhook-server-cert" not found Mar 13 11:10:04.963999 master-0 kubenswrapper[33013]: I0313 11:10:04.963905 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" Mar 13 11:10:04.964262 master-0 kubenswrapper[33013]: E0313 11:10:04.964134 33013 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 11:10:04.964262 master-0 kubenswrapper[33013]: E0313 11:10:04.964252 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert podName:07aae105-cbfc-4df6-97ee-2231d7611d03 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:12.964225663 +0000 UTC m=+796.440179012 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" (UID: "07aae105-cbfc-4df6-97ee-2231d7611d03") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 13 11:10:05.066515 master-0 kubenswrapper[33013]: I0313 11:10:05.066410 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:05.066857 master-0 kubenswrapper[33013]: I0313 11:10:05.066578 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:05.066857 master-0 kubenswrapper[33013]: E0313 11:10:05.066710 33013 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 11:10:05.066857 master-0 kubenswrapper[33013]: E0313 11:10:05.066759 33013 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 11:10:05.066857 master-0 kubenswrapper[33013]: E0313 11:10:05.066834 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. 
No retries permitted until 2026-03-13 11:10:13.066801767 +0000 UTC m=+796.542755106 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "webhook-server-cert" not found Mar 13 11:10:05.066857 master-0 kubenswrapper[33013]: E0313 11:10:05.066859 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:13.066850448 +0000 UTC m=+796.542803797 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "metrics-server-cert" not found Mar 13 11:10:12.349201 master-0 kubenswrapper[33013]: I0313 11:10:12.349090 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:10:12.349890 master-0 kubenswrapper[33013]: E0313 11:10:12.349290 33013 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 13 11:10:12.349890 master-0 kubenswrapper[33013]: E0313 11:10:12.349370 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert podName:3670abc6-2527-4580-bf31-36cc0294afd4 
nodeName:}" failed. No retries permitted until 2026-03-13 11:10:28.349350442 +0000 UTC m=+811.825303791 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert") pod "infra-operator-controller-manager-b8c8d7cc8-gstmf" (UID: "3670abc6-2527-4580-bf31-36cc0294afd4") : secret "infra-operator-webhook-server-cert" not found Mar 13 11:10:12.966265 master-0 kubenswrapper[33013]: I0313 11:10:12.966000 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" Mar 13 11:10:12.993314 master-0 kubenswrapper[33013]: I0313 11:10:12.993214 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07aae105-cbfc-4df6-97ee-2231d7611d03-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk\" (UID: \"07aae105-cbfc-4df6-97ee-2231d7611d03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" Mar 13 11:10:13.068433 master-0 kubenswrapper[33013]: I0313 11:10:13.068354 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:13.068759 master-0 kubenswrapper[33013]: I0313 11:10:13.068525 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:13.068759 master-0 kubenswrapper[33013]: E0313 11:10:13.068563 33013 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 13 11:10:13.068759 master-0 kubenswrapper[33013]: E0313 11:10:13.068716 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:29.068695894 +0000 UTC m=+812.544649243 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "webhook-server-cert" not found Mar 13 11:10:13.069135 master-0 kubenswrapper[33013]: E0313 11:10:13.069037 33013 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 13 11:10:13.069310 master-0 kubenswrapper[33013]: E0313 11:10:13.069197 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs podName:baede48e-55be-4e10-ad58-d36d1a72d782 nodeName:}" failed. No retries permitted until 2026-03-13 11:10:29.069169617 +0000 UTC m=+812.545123096 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-5tkfn" (UID: "baede48e-55be-4e10-ad58-d36d1a72d782") : secret "metrics-server-cert" not found Mar 13 11:10:13.258418 master-0 kubenswrapper[33013]: I0313 11:10:13.258259 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" Mar 13 11:10:21.575965 master-0 kubenswrapper[33013]: I0313 11:10:21.575903 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk"] Mar 13 11:10:22.021728 master-0 kubenswrapper[33013]: I0313 11:10:22.021662 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc" event={"ID":"bf8f932c-8ce0-43b0-9bdb-c307a671bb43","Type":"ContainerStarted","Data":"e81a682bf39d942ea5008545328ac7c60c6e6d5f68933489133fa6b0c07df3e9"} Mar 13 11:10:22.022065 master-0 kubenswrapper[33013]: I0313 11:10:22.022023 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc" Mar 13 11:10:22.031292 master-0 kubenswrapper[33013]: I0313 11:10:22.031224 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" event={"ID":"b1a6f67c-186b-46ec-a1c5-284bfed80fca","Type":"ContainerStarted","Data":"b89e6e7c464f3d3693bba81deb8bf42a9cadda04802cb045d1ea3e40a209cad7"} Mar 13 11:10:22.032038 master-0 kubenswrapper[33013]: I0313 11:10:22.032011 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" Mar 13 11:10:22.042699 master-0 kubenswrapper[33013]: I0313 
11:10:22.042646 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr" event={"ID":"a112961d-e636-441e-b353-0d71f573d7ff","Type":"ContainerStarted","Data":"281ded50a354474f18790130669ee23919b391f567ae5f4deecd7b648d9d5021"} Mar 13 11:10:22.044232 master-0 kubenswrapper[33013]: I0313 11:10:22.044213 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr" Mar 13 11:10:22.062640 master-0 kubenswrapper[33013]: I0313 11:10:22.060730 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf" event={"ID":"ebe5af93-c06f-4bef-83e6-ea978ff533b4","Type":"ContainerStarted","Data":"37692ad391d435b9a9332270812ebee4987fd855f429264464a7b72c0cdcbac4"} Mar 13 11:10:22.062640 master-0 kubenswrapper[33013]: I0313 11:10:22.062029 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf" Mar 13 11:10:22.104616 master-0 kubenswrapper[33013]: I0313 11:10:22.102747 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr" podStartSLOduration=4.729281709 podStartE2EDuration="26.102725097s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:59.533506227 +0000 UTC m=+783.009459576" lastFinishedPulling="2026-03-13 11:10:20.906949615 +0000 UTC m=+804.382902964" observedRunningTime="2026-03-13 11:10:22.101649046 +0000 UTC m=+805.577602395" watchObservedRunningTime="2026-03-13 11:10:22.102725097 +0000 UTC m=+805.578678446" Mar 13 11:10:22.105619 master-0 kubenswrapper[33013]: I0313 11:10:22.105540 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6" 
event={"ID":"8d2f8f95-c1d5-48d5-a0ea-c172906cbd9f","Type":"ContainerStarted","Data":"35aceb33755364f0d6d0dc7ec60c1f92fffbea0e85b8d6db970ba21c32f36e2f"} Mar 13 11:10:22.108076 master-0 kubenswrapper[33013]: I0313 11:10:22.106374 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6" Mar 13 11:10:22.125037 master-0 kubenswrapper[33013]: I0313 11:10:22.123957 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" event={"ID":"c786eb3c-e9a9-4184-8729-c6f379982a73","Type":"ContainerStarted","Data":"d047c03d76d81a04d835857df680b2690d7c18dd64f21aeeed7849aaf579c3d7"} Mar 13 11:10:22.125037 master-0 kubenswrapper[33013]: I0313 11:10:22.124993 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" Mar 13 11:10:22.132539 master-0 kubenswrapper[33013]: I0313 11:10:22.132447 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc" podStartSLOduration=4.66341504 podStartE2EDuration="26.132421884s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:58.747923636 +0000 UTC m=+782.223876985" lastFinishedPulling="2026-03-13 11:10:20.21693048 +0000 UTC m=+803.692883829" observedRunningTime="2026-03-13 11:10:22.050564785 +0000 UTC m=+805.526518134" watchObservedRunningTime="2026-03-13 11:10:22.132421884 +0000 UTC m=+805.608375233" Mar 13 11:10:22.134385 master-0 kubenswrapper[33013]: I0313 11:10:22.134172 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" event={"ID":"b3b82413-ba4e-4c33-8b26-b300aef4c26a","Type":"ContainerStarted","Data":"797ed7cd5ec30090dfecd7daef89fa10d8396e41fc7abfb6e244fddc1a366b7a"} Mar 13 
11:10:22.134560 master-0 kubenswrapper[33013]: I0313 11:10:22.134532 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" Mar 13 11:10:22.136762 master-0 kubenswrapper[33013]: I0313 11:10:22.136732 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz" event={"ID":"bca7c27a-ec95-4e7b-8a46-35e6b8cc9f9f","Type":"ContainerStarted","Data":"cbd28c57a38b896678e66b940be0b1de344425ae73887df0e5b178ed95617c5c"} Mar 13 11:10:22.137737 master-0 kubenswrapper[33013]: I0313 11:10:22.137721 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz" Mar 13 11:10:22.143926 master-0 kubenswrapper[33013]: I0313 11:10:22.143861 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77" event={"ID":"1dcceaeb-7636-406b-b013-b76f9a71bee7","Type":"ContainerStarted","Data":"5e6804c6d22f4eb971527ddd5d21ff6a5e75f65046b441e0396ea64e2e59dd0d"} Mar 13 11:10:22.143926 master-0 kubenswrapper[33013]: I0313 11:10:22.143925 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77" Mar 13 11:10:22.165629 master-0 kubenswrapper[33013]: I0313 11:10:22.163372 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9" event={"ID":"cc6d57f2-8d08-48fc-ab2c-7e3e8a00560b","Type":"ContainerStarted","Data":"28af5d5b23e231d0ebaeb0910582c7db4954b3f250ce9ec13ff793e84dfd1bb6"} Mar 13 11:10:22.165629 master-0 kubenswrapper[33013]: I0313 11:10:22.164388 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9" Mar 13 
11:10:22.169768 master-0 kubenswrapper[33013]: I0313 11:10:22.169679 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" podStartSLOduration=4.977984486 podStartE2EDuration="27.169650865s" podCreationTimestamp="2026-03-13 11:09:55 +0000 UTC" firstStartedPulling="2026-03-13 11:09:58.00006516 +0000 UTC m=+781.476018519" lastFinishedPulling="2026-03-13 11:10:20.191731549 +0000 UTC m=+803.667684898" observedRunningTime="2026-03-13 11:10:22.138525217 +0000 UTC m=+805.614478566" watchObservedRunningTime="2026-03-13 11:10:22.169650865 +0000 UTC m=+805.645604214" Mar 13 11:10:22.184871 master-0 kubenswrapper[33013]: I0313 11:10:22.183814 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx" event={"ID":"43c4e3b3-c529-4106-9948-6c9731fffb4c","Type":"ContainerStarted","Data":"45c761d082bbfd69550916087685cc60cf4bb81754b3714436496d0b60e1871e"} Mar 13 11:10:22.184871 master-0 kubenswrapper[33013]: I0313 11:10:22.184758 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx" Mar 13 11:10:22.197368 master-0 kubenswrapper[33013]: I0313 11:10:22.195846 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss" Mar 13 11:10:22.212986 master-0 kubenswrapper[33013]: I0313 11:10:22.209578 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" event={"ID":"306ace2b-16a3-4373-ba6b-e2ed8f8d9d89","Type":"ContainerStarted","Data":"df1370a0a8979799da7495aeb5782bca2e542d62de0df8d55911781bb237e957"} Mar 13 11:10:22.212986 master-0 kubenswrapper[33013]: I0313 11:10:22.209712 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" Mar 13 11:10:22.222631 master-0 kubenswrapper[33013]: I0313 11:10:22.220299 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" podStartSLOduration=4.489484875 podStartE2EDuration="27.220269503s" podCreationTimestamp="2026-03-13 11:09:55 +0000 UTC" firstStartedPulling="2026-03-13 11:09:57.462045512 +0000 UTC m=+780.937998861" lastFinishedPulling="2026-03-13 11:10:20.19283014 +0000 UTC m=+803.668783489" observedRunningTime="2026-03-13 11:10:22.219135921 +0000 UTC m=+805.695089270" watchObservedRunningTime="2026-03-13 11:10:22.220269503 +0000 UTC m=+805.696222852" Mar 13 11:10:22.234715 master-0 kubenswrapper[33013]: I0313 11:10:22.234669 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv" event={"ID":"8062a09f-5b4d-4f2c-bf7a-468a9c04f706","Type":"ContainerStarted","Data":"917dd5b73becdca6ec194c9d79ed73ff823c962cbaac54426a0bbfd9e5211f58"} Mar 13 11:10:22.237768 master-0 kubenswrapper[33013]: I0313 11:10:22.235340 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv" Mar 13 11:10:22.237768 master-0 kubenswrapper[33013]: I0313 11:10:22.236826 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" event={"ID":"07aae105-cbfc-4df6-97ee-2231d7611d03","Type":"ContainerStarted","Data":"97e71f2888e65b5bb3196e0a3da922672784e48ea37049c69bd7fef138f540e9"} Mar 13 11:10:22.239606 master-0 kubenswrapper[33013]: I0313 11:10:22.238387 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" 
event={"ID":"72eda6f5-7ceb-41a5-a145-c823bf409279","Type":"ContainerStarted","Data":"ee4ca1696bb9f86237b1751387e196d391bf6cee7b6ff67ffcd73873cc088c96"} Mar 13 11:10:22.239606 master-0 kubenswrapper[33013]: I0313 11:10:22.238992 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" Mar 13 11:10:22.267975 master-0 kubenswrapper[33013]: I0313 11:10:22.267918 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks" event={"ID":"c6673eb2-9dc9-48da-b64f-ac7aa72aae98","Type":"ContainerStarted","Data":"30118f8e2e13dc6a70554206f6ad1aefc2d987ef4aa760fa938351b740f94eeb"} Mar 13 11:10:22.270626 master-0 kubenswrapper[33013]: I0313 11:10:22.268849 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks" Mar 13 11:10:22.273992 master-0 kubenswrapper[33013]: I0313 11:10:22.273943 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr" event={"ID":"f6028561-8260-4458-b9e2-16b72191f792","Type":"ContainerStarted","Data":"5093ea8a35551d8481162cd57e40c6fdda47e056b97290a321cb62428e937dd2"} Mar 13 11:10:22.287611 master-0 kubenswrapper[33013]: I0313 11:10:22.282916 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q" event={"ID":"f5eb3916-1c7f-434f-a559-141a2b52d2c3","Type":"ContainerStarted","Data":"a90e655a9f6fd7dc4a872e22dd4f04ec17328bc1150870ddff1b6e86b6ff5161"} Mar 13 11:10:22.287611 master-0 kubenswrapper[33013]: I0313 11:10:22.283328 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" podStartSLOduration=4.456408182 podStartE2EDuration="27.283302501s" 
podCreationTimestamp="2026-03-13 11:09:55 +0000 UTC" firstStartedPulling="2026-03-13 11:09:57.999074802 +0000 UTC m=+781.475028151" lastFinishedPulling="2026-03-13 11:10:20.825969121 +0000 UTC m=+804.301922470" observedRunningTime="2026-03-13 11:10:22.266953989 +0000 UTC m=+805.742907338" watchObservedRunningTime="2026-03-13 11:10:22.283302501 +0000 UTC m=+805.759255860" Mar 13 11:10:22.287611 master-0 kubenswrapper[33013]: I0313 11:10:22.283882 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q" Mar 13 11:10:22.316627 master-0 kubenswrapper[33013]: I0313 11:10:22.315735 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6" podStartSLOduration=3.912899219 podStartE2EDuration="26.315705845s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:58.49210536 +0000 UTC m=+781.968058709" lastFinishedPulling="2026-03-13 11:10:20.894911986 +0000 UTC m=+804.370865335" observedRunningTime="2026-03-13 11:10:22.309185321 +0000 UTC m=+805.785138670" watchObservedRunningTime="2026-03-13 11:10:22.315705845 +0000 UTC m=+805.791659184" Mar 13 11:10:22.384638 master-0 kubenswrapper[33013]: I0313 11:10:22.382559 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz" podStartSLOduration=4.292221629 podStartE2EDuration="26.3825362s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:58.774684281 +0000 UTC m=+782.250637630" lastFinishedPulling="2026-03-13 11:10:20.864998852 +0000 UTC m=+804.340952201" observedRunningTime="2026-03-13 11:10:22.371470568 +0000 UTC m=+805.847423917" watchObservedRunningTime="2026-03-13 11:10:22.3825362 +0000 UTC m=+805.858489549" Mar 13 11:10:22.457036 master-0 kubenswrapper[33013]: I0313 
11:10:22.455433 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf" podStartSLOduration=5.277087352 podStartE2EDuration="26.455412236s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:59.647690598 +0000 UTC m=+783.123643947" lastFinishedPulling="2026-03-13 11:10:20.826015482 +0000 UTC m=+804.301968831" observedRunningTime="2026-03-13 11:10:22.453876692 +0000 UTC m=+805.929830041" watchObservedRunningTime="2026-03-13 11:10:22.455412236 +0000 UTC m=+805.931365585" Mar 13 11:10:22.681656 master-0 kubenswrapper[33013]: I0313 11:10:22.679011 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9" podStartSLOduration=5.376998521 podStartE2EDuration="26.678984443s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:59.603159632 +0000 UTC m=+783.079112991" lastFinishedPulling="2026-03-13 11:10:20.905145564 +0000 UTC m=+804.381098913" observedRunningTime="2026-03-13 11:10:22.666906722 +0000 UTC m=+806.142860071" watchObservedRunningTime="2026-03-13 11:10:22.678984443 +0000 UTC m=+806.154937792" Mar 13 11:10:22.828507 master-0 kubenswrapper[33013]: I0313 11:10:22.828413 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss" podStartSLOduration=5.582547438 podStartE2EDuration="26.828381817s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:59.675486752 +0000 UTC m=+783.151440101" lastFinishedPulling="2026-03-13 11:10:20.921321131 +0000 UTC m=+804.397274480" observedRunningTime="2026-03-13 11:10:22.816511422 +0000 UTC m=+806.292464771" watchObservedRunningTime="2026-03-13 11:10:22.828381817 +0000 UTC m=+806.304335166" Mar 13 11:10:23.023606 master-0 
kubenswrapper[33013]: I0313 11:10:23.023470 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77" podStartSLOduration=14.332397157 podStartE2EDuration="28.02344556s" podCreationTimestamp="2026-03-13 11:09:55 +0000 UTC" firstStartedPulling="2026-03-13 11:09:57.999256387 +0000 UTC m=+781.475209736" lastFinishedPulling="2026-03-13 11:10:11.69030478 +0000 UTC m=+795.166258139" observedRunningTime="2026-03-13 11:10:23.020376983 +0000 UTC m=+806.496330332" watchObservedRunningTime="2026-03-13 11:10:23.02344556 +0000 UTC m=+806.499398899" Mar 13 11:10:23.193486 master-0 kubenswrapper[33013]: I0313 11:10:23.193314 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks" podStartSLOduration=6.040525098 podStartE2EDuration="27.193282631s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:59.673661601 +0000 UTC m=+783.149614950" lastFinishedPulling="2026-03-13 11:10:20.826419134 +0000 UTC m=+804.302372483" observedRunningTime="2026-03-13 11:10:23.191793669 +0000 UTC m=+806.667747018" watchObservedRunningTime="2026-03-13 11:10:23.193282631 +0000 UTC m=+806.669235980" Mar 13 11:10:23.297777 master-0 kubenswrapper[33013]: I0313 11:10:23.297692 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q" podStartSLOduration=6.145329984 podStartE2EDuration="27.297674876s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:59.674110373 +0000 UTC m=+783.150063722" lastFinishedPulling="2026-03-13 11:10:20.826455265 +0000 UTC m=+804.302408614" observedRunningTime="2026-03-13 11:10:23.296962465 +0000 UTC m=+806.772915814" watchObservedRunningTime="2026-03-13 11:10:23.297674876 +0000 UTC m=+806.773628225" Mar 13 11:10:23.331638 
master-0 kubenswrapper[33013]: I0313 11:10:23.329885 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss" event={"ID":"5771aaff-3b51-43f6-886d-8c9beb93d212","Type":"ContainerStarted","Data":"8af8cc0225fbfeeaf3b7ce6e817ba0b5194277570497dda8e90edf4706770b84"} Mar 13 11:10:23.347627 master-0 kubenswrapper[33013]: I0313 11:10:23.346815 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894" event={"ID":"57e2bcf4-7a93-426b-943c-f5a5b187190d","Type":"ContainerStarted","Data":"5fb267a701fa01b8f84d128df3181c170fec358e369f4661ecf8ca2a33c574b2"} Mar 13 11:10:23.350640 master-0 kubenswrapper[33013]: I0313 11:10:23.347933 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894" Mar 13 11:10:23.378641 master-0 kubenswrapper[33013]: I0313 11:10:23.376267 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp" event={"ID":"dc9351b2-a2f7-40bc-bcdb-b27629a9a77f","Type":"ContainerStarted","Data":"92a27dab27c556b9a32822285c9108ee3c155c1c88a1c3bac088b85feea9ca64"} Mar 13 11:10:23.378641 master-0 kubenswrapper[33013]: I0313 11:10:23.377480 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp" Mar 13 11:10:23.392752 master-0 kubenswrapper[33013]: I0313 11:10:23.392656 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv" podStartSLOduration=6.159552275 podStartE2EDuration="27.392631024s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:59.673006232 +0000 UTC m=+783.148959581" lastFinishedPulling="2026-03-13 11:10:20.906084971 +0000 UTC 
m=+804.382038330" observedRunningTime="2026-03-13 11:10:23.370067668 +0000 UTC m=+806.846021017" watchObservedRunningTime="2026-03-13 11:10:23.392631024 +0000 UTC m=+806.868584373" Mar 13 11:10:23.881731 master-0 kubenswrapper[33013]: I0313 11:10:23.881491 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xsksr" podStartSLOduration=6.671094746 podStartE2EDuration="27.881471584s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:59.616073866 +0000 UTC m=+783.092027215" lastFinishedPulling="2026-03-13 11:10:20.826450704 +0000 UTC m=+804.302404053" observedRunningTime="2026-03-13 11:10:23.867433608 +0000 UTC m=+807.343386947" watchObservedRunningTime="2026-03-13 11:10:23.881471584 +0000 UTC m=+807.357424933" Mar 13 11:10:23.968496 master-0 kubenswrapper[33013]: I0313 11:10:23.968409 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" podStartSLOduration=6.238052781 podStartE2EDuration="28.968388216s" podCreationTimestamp="2026-03-13 11:09:55 +0000 UTC" firstStartedPulling="2026-03-13 11:09:57.461483037 +0000 UTC m=+780.937436386" lastFinishedPulling="2026-03-13 11:10:20.191818472 +0000 UTC m=+803.667771821" observedRunningTime="2026-03-13 11:10:23.966929415 +0000 UTC m=+807.442882764" watchObservedRunningTime="2026-03-13 11:10:23.968388216 +0000 UTC m=+807.444341565" Mar 13 11:10:23.976941 master-0 kubenswrapper[33013]: I0313 11:10:23.976871 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" podStartSLOduration=6.552778159 podStartE2EDuration="28.976852085s" podCreationTimestamp="2026-03-13 11:09:55 +0000 UTC" firstStartedPulling="2026-03-13 11:09:57.767827908 +0000 UTC m=+781.243781257" lastFinishedPulling="2026-03-13 
11:10:20.191901824 +0000 UTC m=+803.667855183" observedRunningTime="2026-03-13 11:10:23.919881978 +0000 UTC m=+807.395835327" watchObservedRunningTime="2026-03-13 11:10:23.976852085 +0000 UTC m=+807.452805434" Mar 13 11:10:23.999614 master-0 kubenswrapper[33013]: I0313 11:10:23.998271 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx" podStartSLOduration=6.420193359 podStartE2EDuration="27.998253319s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:59.316733122 +0000 UTC m=+782.792686471" lastFinishedPulling="2026-03-13 11:10:20.894793082 +0000 UTC m=+804.370746431" observedRunningTime="2026-03-13 11:10:23.996017385 +0000 UTC m=+807.471970734" watchObservedRunningTime="2026-03-13 11:10:23.998253319 +0000 UTC m=+807.474206668" Mar 13 11:10:24.038086 master-0 kubenswrapper[33013]: I0313 11:10:24.037963 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp" podStartSLOduration=5.561778273 podStartE2EDuration="28.037931348s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:58.611013724 +0000 UTC m=+782.086967073" lastFinishedPulling="2026-03-13 11:10:21.087166809 +0000 UTC m=+804.563120148" observedRunningTime="2026-03-13 11:10:24.027893395 +0000 UTC m=+807.503846744" watchObservedRunningTime="2026-03-13 11:10:24.037931348 +0000 UTC m=+807.513884697" Mar 13 11:10:24.066930 master-0 kubenswrapper[33013]: I0313 11:10:24.066471 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894" podStartSLOduration=5.533887096 podStartE2EDuration="28.066452622s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:09:58.491839393 +0000 UTC m=+781.967792742" 
lastFinishedPulling="2026-03-13 11:10:21.024404919 +0000 UTC m=+804.500358268" observedRunningTime="2026-03-13 11:10:24.061216845 +0000 UTC m=+807.537170194" watchObservedRunningTime="2026-03-13 11:10:24.066452622 +0000 UTC m=+807.542405971" Mar 13 11:10:26.260789 master-0 kubenswrapper[33013]: I0313 11:10:26.260604 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-xx9n2" Mar 13 11:10:26.330731 master-0 kubenswrapper[33013]: I0313 11:10:26.330658 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-pgqh9" Mar 13 11:10:26.421425 master-0 kubenswrapper[33013]: I0313 11:10:26.421376 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-2t4jt" Mar 13 11:10:26.425876 master-0 kubenswrapper[33013]: I0313 11:10:26.425800 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" Mar 13 11:10:26.426035 master-0 kubenswrapper[33013]: I0313 11:10:26.425877 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" event={"ID":"07aae105-cbfc-4df6-97ee-2231d7611d03","Type":"ContainerStarted","Data":"ec6d32532b523fbce1e9cb188394e0ad01c3828a9d0d586a64ff133f11935abb"} Mar 13 11:10:26.502000 master-0 kubenswrapper[33013]: I0313 11:10:26.501904 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" podStartSLOduration=26.240656259 podStartE2EDuration="30.501851653s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="2026-03-13 11:10:21.642263038 +0000 UTC m=+805.118216387" 
lastFinishedPulling="2026-03-13 11:10:25.903458432 +0000 UTC m=+809.379411781" observedRunningTime="2026-03-13 11:10:26.494486935 +0000 UTC m=+809.970440284" watchObservedRunningTime="2026-03-13 11:10:26.501851653 +0000 UTC m=+809.977805002" Mar 13 11:10:26.624780 master-0 kubenswrapper[33013]: I0313 11:10:26.624699 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-p6pnl" Mar 13 11:10:26.708743 master-0 kubenswrapper[33013]: I0313 11:10:26.708661 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kqwhl" Mar 13 11:10:26.771239 master-0 kubenswrapper[33013]: I0313 11:10:26.770366 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-6rf77" Mar 13 11:10:26.929119 master-0 kubenswrapper[33013]: I0313 11:10:26.928946 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-g2vp6" Mar 13 11:10:27.026307 master-0 kubenswrapper[33013]: I0313 11:10:27.026219 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qb894" Mar 13 11:10:27.087885 master-0 kubenswrapper[33013]: I0313 11:10:27.087803 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-x79wp" Mar 13 11:10:27.158719 master-0 kubenswrapper[33013]: I0313 11:10:27.158342 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7vspc" Mar 13 11:10:27.289120 master-0 kubenswrapper[33013]: I0313 11:10:27.288962 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-c78nz" Mar 13 11:10:27.302540 master-0 kubenswrapper[33013]: I0313 11:10:27.302422 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-d77xf" Mar 13 11:10:27.361235 master-0 kubenswrapper[33013]: I0313 11:10:27.361175 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-54bbr" Mar 13 11:10:27.391675 master-0 kubenswrapper[33013]: I0313 11:10:27.389149 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-srfks" Mar 13 11:10:27.430572 master-0 kubenswrapper[33013]: I0313 11:10:27.429960 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-ckxss" Mar 13 11:10:27.435838 master-0 kubenswrapper[33013]: I0313 11:10:27.435619 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-ss27q" Mar 13 11:10:27.530049 master-0 kubenswrapper[33013]: I0313 11:10:27.529705 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-b6rqx" Mar 13 11:10:27.678348 master-0 kubenswrapper[33013]: I0313 11:10:27.678206 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-pgrw9" Mar 13 11:10:27.737509 master-0 kubenswrapper[33013]: I0313 11:10:27.733530 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-87rhv" Mar 13 11:10:28.412698 master-0 kubenswrapper[33013]: I0313 11:10:28.412625 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:10:28.421496 master-0 kubenswrapper[33013]: I0313 11:10:28.421441 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3670abc6-2527-4580-bf31-36cc0294afd4-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-gstmf\" (UID: \"3670abc6-2527-4580-bf31-36cc0294afd4\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:10:28.642446 master-0 kubenswrapper[33013]: I0313 11:10:28.642354 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:10:29.125578 master-0 kubenswrapper[33013]: I0313 11:10:29.123275 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf"] Mar 13 11:10:29.131623 master-0 kubenswrapper[33013]: I0313 11:10:29.131541 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:29.131833 master-0 kubenswrapper[33013]: I0313 11:10:29.131729 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs\") pod 
\"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:29.136380 master-0 kubenswrapper[33013]: I0313 11:10:29.136327 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:29.138412 master-0 kubenswrapper[33013]: I0313 11:10:29.138358 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/baede48e-55be-4e10-ad58-d36d1a72d782-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-5tkfn\" (UID: \"baede48e-55be-4e10-ad58-d36d1a72d782\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:29.233945 master-0 kubenswrapper[33013]: I0313 11:10:29.233842 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:29.478854 master-0 kubenswrapper[33013]: I0313 11:10:29.478765 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" event={"ID":"3670abc6-2527-4580-bf31-36cc0294afd4","Type":"ContainerStarted","Data":"6c941f16244f9e325506fd718341698df9c6c2f83c48758f4f35010bc6bb6ae3"} Mar 13 11:10:29.757247 master-0 kubenswrapper[33013]: I0313 11:10:29.757009 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn"] Mar 13 11:10:29.758122 master-0 kubenswrapper[33013]: W0313 11:10:29.758046 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaede48e_55be_4e10_ad58_d36d1a72d782.slice/crio-bef226ecf18f5d191447c171f85e51eef2b1b6df273403224006fb8b20f79abb WatchSource:0}: Error finding container bef226ecf18f5d191447c171f85e51eef2b1b6df273403224006fb8b20f79abb: Status 404 returned error can't find the container with id bef226ecf18f5d191447c171f85e51eef2b1b6df273403224006fb8b20f79abb Mar 13 11:10:30.491383 master-0 kubenswrapper[33013]: I0313 11:10:30.491323 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" event={"ID":"baede48e-55be-4e10-ad58-d36d1a72d782","Type":"ContainerStarted","Data":"169203e16f6e75e2865e5897d5c2951bfbefaf28a4f5c385b9b4af4fa97edcf5"} Mar 13 11:10:30.491383 master-0 kubenswrapper[33013]: I0313 11:10:30.491377 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" event={"ID":"baede48e-55be-4e10-ad58-d36d1a72d782","Type":"ContainerStarted","Data":"bef226ecf18f5d191447c171f85e51eef2b1b6df273403224006fb8b20f79abb"} Mar 13 11:10:30.492232 master-0 
kubenswrapper[33013]: I0313 11:10:30.491487 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:33.267116 master-0 kubenswrapper[33013]: I0313 11:10:33.267043 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-7tdvk" Mar 13 11:10:34.111303 master-0 kubenswrapper[33013]: I0313 11:10:34.111188 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" podStartSLOduration=38.111167717 podStartE2EDuration="38.111167717s" podCreationTimestamp="2026-03-13 11:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:10:30.766275749 +0000 UTC m=+814.242229098" watchObservedRunningTime="2026-03-13 11:10:34.111167717 +0000 UTC m=+817.587121066" Mar 13 11:10:34.549077 master-0 kubenswrapper[33013]: I0313 11:10:34.548954 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" event={"ID":"3670abc6-2527-4580-bf31-36cc0294afd4","Type":"ContainerStarted","Data":"5756acaa01eec482b78e524981b0ef37f9bc79c0fa40407c0fb0b051a2ebfe50"} Mar 13 11:10:34.550194 master-0 kubenswrapper[33013]: I0313 11:10:34.550061 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:10:34.583923 master-0 kubenswrapper[33013]: I0313 11:10:34.583798 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" podStartSLOduration=34.411737768 podStartE2EDuration="39.583770818s" podCreationTimestamp="2026-03-13 11:09:55 +0000 UTC" 
firstStartedPulling="2026-03-13 11:10:29.11614147 +0000 UTC m=+812.592094819" lastFinishedPulling="2026-03-13 11:10:34.28817453 +0000 UTC m=+817.764127869" observedRunningTime="2026-03-13 11:10:34.573430267 +0000 UTC m=+818.049383636" watchObservedRunningTime="2026-03-13 11:10:34.583770818 +0000 UTC m=+818.059724167" Mar 13 11:10:39.245433 master-0 kubenswrapper[33013]: I0313 11:10:39.245348 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-5tkfn" Mar 13 11:10:48.651353 master-0 kubenswrapper[33013]: I0313 11:10:48.651246 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-gstmf" Mar 13 11:11:28.994016 master-0 kubenswrapper[33013]: I0313 11:11:28.993929 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-2nj9d"] Mar 13 11:11:28.997599 master-0 kubenswrapper[33013]: I0313 11:11:28.997534 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:29.005146 master-0 kubenswrapper[33013]: I0313 11:11:29.005097 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 13 11:11:29.005425 master-0 kubenswrapper[33013]: I0313 11:11:29.005380 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 13 11:11:29.005546 master-0 kubenswrapper[33013]: I0313 11:11:29.005099 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-2nj9d"] Mar 13 11:11:29.005637 master-0 kubenswrapper[33013]: I0313 11:11:29.005223 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 13 11:11:29.119602 master-0 kubenswrapper[33013]: I0313 11:11:29.119482 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-f6hgp"] Mar 13 11:11:29.121830 master-0 kubenswrapper[33013]: I0313 11:11:29.121792 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.124567 master-0 kubenswrapper[33013]: I0313 11:11:29.124500 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 13 11:11:29.127660 master-0 kubenswrapper[33013]: I0313 11:11:29.127627 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-f6hgp"] Mar 13 11:11:29.156711 master-0 kubenswrapper[33013]: I0313 11:11:29.156653 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4305cf0-1184-4ece-a830-a7997ae253d3-config\") pod \"dnsmasq-dns-685c76cf85-2nj9d\" (UID: \"a4305cf0-1184-4ece-a830-a7997ae253d3\") " pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:29.157072 master-0 kubenswrapper[33013]: I0313 11:11:29.157054 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfsh\" (UniqueName: \"kubernetes.io/projected/a4305cf0-1184-4ece-a830-a7997ae253d3-kube-api-access-cjfsh\") pod \"dnsmasq-dns-685c76cf85-2nj9d\" (UID: \"a4305cf0-1184-4ece-a830-a7997ae253d3\") " pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:29.259017 master-0 kubenswrapper[33013]: I0313 11:11:29.258878 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntpd5\" (UniqueName: \"kubernetes.io/projected/436732f6-4687-4878-a8fb-0c70a9ea7521-kube-api-access-ntpd5\") pod \"dnsmasq-dns-8476fd89bc-f6hgp\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.259417 master-0 kubenswrapper[33013]: I0313 11:11:29.259387 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-config\") pod 
\"dnsmasq-dns-8476fd89bc-f6hgp\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.259579 master-0 kubenswrapper[33013]: I0313 11:11:29.259564 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfsh\" (UniqueName: \"kubernetes.io/projected/a4305cf0-1184-4ece-a830-a7997ae253d3-kube-api-access-cjfsh\") pod \"dnsmasq-dns-685c76cf85-2nj9d\" (UID: \"a4305cf0-1184-4ece-a830-a7997ae253d3\") " pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:29.259909 master-0 kubenswrapper[33013]: I0313 11:11:29.259894 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4305cf0-1184-4ece-a830-a7997ae253d3-config\") pod \"dnsmasq-dns-685c76cf85-2nj9d\" (UID: \"a4305cf0-1184-4ece-a830-a7997ae253d3\") " pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:29.260027 master-0 kubenswrapper[33013]: I0313 11:11:29.260013 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-f6hgp\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.260903 master-0 kubenswrapper[33013]: I0313 11:11:29.260858 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4305cf0-1184-4ece-a830-a7997ae253d3-config\") pod \"dnsmasq-dns-685c76cf85-2nj9d\" (UID: \"a4305cf0-1184-4ece-a830-a7997ae253d3\") " pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:29.275219 master-0 kubenswrapper[33013]: I0313 11:11:29.275164 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjfsh\" (UniqueName: \"kubernetes.io/projected/a4305cf0-1184-4ece-a830-a7997ae253d3-kube-api-access-cjfsh\") pod 
\"dnsmasq-dns-685c76cf85-2nj9d\" (UID: \"a4305cf0-1184-4ece-a830-a7997ae253d3\") " pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:29.361759 master-0 kubenswrapper[33013]: I0313 11:11:29.361687 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-f6hgp\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.362023 master-0 kubenswrapper[33013]: I0313 11:11:29.361782 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntpd5\" (UniqueName: \"kubernetes.io/projected/436732f6-4687-4878-a8fb-0c70a9ea7521-kube-api-access-ntpd5\") pod \"dnsmasq-dns-8476fd89bc-f6hgp\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.362023 master-0 kubenswrapper[33013]: I0313 11:11:29.361856 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-config\") pod \"dnsmasq-dns-8476fd89bc-f6hgp\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.363061 master-0 kubenswrapper[33013]: I0313 11:11:29.363020 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-config\") pod \"dnsmasq-dns-8476fd89bc-f6hgp\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.363724 master-0 kubenswrapper[33013]: I0313 11:11:29.363684 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-f6hgp\" (UID: 
\"436732f6-4687-4878-a8fb-0c70a9ea7521\") " pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.377686 master-0 kubenswrapper[33013]: I0313 11:11:29.377624 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:29.400617 master-0 kubenswrapper[33013]: I0313 11:11:29.398893 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntpd5\" (UniqueName: \"kubernetes.io/projected/436732f6-4687-4878-a8fb-0c70a9ea7521-kube-api-access-ntpd5\") pod \"dnsmasq-dns-8476fd89bc-f6hgp\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.447129 master-0 kubenswrapper[33013]: I0313 11:11:29.442368 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:29.873906 master-0 kubenswrapper[33013]: I0313 11:11:29.872371 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-2nj9d"] Mar 13 11:11:30.010638 master-0 kubenswrapper[33013]: I0313 11:11:30.010575 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-f6hgp"] Mar 13 11:11:30.012903 master-0 kubenswrapper[33013]: W0313 11:11:30.012860 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod436732f6_4687_4878_a8fb_0c70a9ea7521.slice/crio-2704bc92f6a913a3b9ce2109e6e68406cbb54ffcebfc166a7c8be4c03522a695 WatchSource:0}: Error finding container 2704bc92f6a913a3b9ce2109e6e68406cbb54ffcebfc166a7c8be4c03522a695: Status 404 returned error can't find the container with id 2704bc92f6a913a3b9ce2109e6e68406cbb54ffcebfc166a7c8be4c03522a695 Mar 13 11:11:30.134756 master-0 kubenswrapper[33013]: I0313 11:11:30.134535 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" 
event={"ID":"436732f6-4687-4878-a8fb-0c70a9ea7521","Type":"ContainerStarted","Data":"2704bc92f6a913a3b9ce2109e6e68406cbb54ffcebfc166a7c8be4c03522a695"}
Mar 13 11:11:30.137538 master-0 kubenswrapper[33013]: I0313 11:11:30.137473 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" event={"ID":"a4305cf0-1184-4ece-a830-a7997ae253d3","Type":"ContainerStarted","Data":"a2908e31b4946a33dad687f4962fd98ebfc461edfc32c7f0617c0f826ba41dd0"}
Mar 13 11:11:32.302011 master-0 kubenswrapper[33013]: I0313 11:11:32.301959 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-2nj9d"]
Mar 13 11:11:32.365182 master-0 kubenswrapper[33013]: I0313 11:11:32.365078 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-z6tq4"]
Mar 13 11:11:32.373460 master-0 kubenswrapper[33013]: I0313 11:11:32.373359 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.446250 master-0 kubenswrapper[33013]: I0313 11:11:32.446138 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5gnl\" (UniqueName: \"kubernetes.io/projected/2ac69375-9dd5-4a86-82d7-bf38b6309480-kube-api-access-b5gnl\") pod \"dnsmasq-dns-586dbdbb8c-z6tq4\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.446685 master-0 kubenswrapper[33013]: I0313 11:11:32.446612 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-z6tq4\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.448926 master-0 kubenswrapper[33013]: I0313 11:11:32.448883 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-config\") pod \"dnsmasq-dns-586dbdbb8c-z6tq4\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.487761 master-0 kubenswrapper[33013]: I0313 11:11:32.485996 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-z6tq4"]
Mar 13 11:11:32.553486 master-0 kubenswrapper[33013]: I0313 11:11:32.553318 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-z6tq4\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.553486 master-0 kubenswrapper[33013]: I0313 11:11:32.553443 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-config\") pod \"dnsmasq-dns-586dbdbb8c-z6tq4\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.553486 master-0 kubenswrapper[33013]: I0313 11:11:32.553477 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5gnl\" (UniqueName: \"kubernetes.io/projected/2ac69375-9dd5-4a86-82d7-bf38b6309480-kube-api-access-b5gnl\") pod \"dnsmasq-dns-586dbdbb8c-z6tq4\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.560614 master-0 kubenswrapper[33013]: I0313 11:11:32.557340 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-z6tq4\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.560614 master-0 kubenswrapper[33013]: I0313 11:11:32.557429 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-config\") pod \"dnsmasq-dns-586dbdbb8c-z6tq4\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.590988 master-0 kubenswrapper[33013]: I0313 11:11:32.590919 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5gnl\" (UniqueName: \"kubernetes.io/projected/2ac69375-9dd5-4a86-82d7-bf38b6309480-kube-api-access-b5gnl\") pod \"dnsmasq-dns-586dbdbb8c-z6tq4\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.791342 master-0 kubenswrapper[33013]: I0313 11:11:32.791261 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4"
Mar 13 11:11:32.877252 master-0 kubenswrapper[33013]: I0313 11:11:32.875969 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-f6hgp"]
Mar 13 11:11:32.884157 master-0 kubenswrapper[33013]: I0313 11:11:32.883739 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"]
Mar 13 11:11:32.889515 master-0 kubenswrapper[33013]: I0313 11:11:32.885880 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:32.909508 master-0 kubenswrapper[33013]: I0313 11:11:32.909448 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-c9cs8\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:32.909773 master-0 kubenswrapper[33013]: I0313 11:11:32.909698 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvct\" (UniqueName: \"kubernetes.io/projected/6b30d941-db8d-4248-bf0b-535afba17d11-kube-api-access-zsvct\") pod \"dnsmasq-dns-6ff8fd9d5c-c9cs8\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:32.909897 master-0 kubenswrapper[33013]: I0313 11:11:32.909871 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-config\") pod \"dnsmasq-dns-6ff8fd9d5c-c9cs8\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:33.009859 master-0 kubenswrapper[33013]: I0313 11:11:33.009804 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"]
Mar 13 11:11:33.011193 master-0 kubenswrapper[33013]: I0313 11:11:33.011147 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsvct\" (UniqueName: \"kubernetes.io/projected/6b30d941-db8d-4248-bf0b-535afba17d11-kube-api-access-zsvct\") pod \"dnsmasq-dns-6ff8fd9d5c-c9cs8\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:33.011387 master-0 kubenswrapper[33013]: I0313 11:11:33.011371 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-config\") pod \"dnsmasq-dns-6ff8fd9d5c-c9cs8\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:33.011476 master-0 kubenswrapper[33013]: I0313 11:11:33.011464 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-c9cs8\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:33.012565 master-0 kubenswrapper[33013]: I0313 11:11:33.012544 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-c9cs8\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:33.017390 master-0 kubenswrapper[33013]: I0313 11:11:33.015047 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-config\") pod \"dnsmasq-dns-6ff8fd9d5c-c9cs8\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:33.063480 master-0 kubenswrapper[33013]: I0313 11:11:33.063399 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsvct\" (UniqueName: \"kubernetes.io/projected/6b30d941-db8d-4248-bf0b-535afba17d11-kube-api-access-zsvct\") pod \"dnsmasq-dns-6ff8fd9d5c-c9cs8\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:33.313881 master-0 kubenswrapper[33013]: I0313 11:11:33.304189 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:11:33.807251 master-0 kubenswrapper[33013]: W0313 11:11:33.807170 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ac69375_9dd5_4a86_82d7_bf38b6309480.slice/crio-3b7cce393b2c51be3cc6ede6910dc785b4e3163f92d5d3332d12353119fbf432 WatchSource:0}: Error finding container 3b7cce393b2c51be3cc6ede6910dc785b4e3163f92d5d3332d12353119fbf432: Status 404 returned error can't find the container with id 3b7cce393b2c51be3cc6ede6910dc785b4e3163f92d5d3332d12353119fbf432
Mar 13 11:11:33.863753 master-0 kubenswrapper[33013]: I0313 11:11:33.863680 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-z6tq4"]
Mar 13 11:11:34.179673 master-0 kubenswrapper[33013]: I0313 11:11:34.171836 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"]
Mar 13 11:11:34.408705 master-0 kubenswrapper[33013]: I0313 11:11:34.408572 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" event={"ID":"6b30d941-db8d-4248-bf0b-535afba17d11","Type":"ContainerStarted","Data":"abe218f445a8ecd9585cfc50797b501a13bf7532332b82dda30e4d5ebfde6c69"}
Mar 13 11:11:34.417039 master-0 kubenswrapper[33013]: I0313 11:11:34.414822 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" event={"ID":"2ac69375-9dd5-4a86-82d7-bf38b6309480","Type":"ContainerStarted","Data":"3b7cce393b2c51be3cc6ede6910dc785b4e3163f92d5d3332d12353119fbf432"}
Mar 13 11:11:36.638019 master-0 kubenswrapper[33013]: I0313 11:11:36.633020 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Mar 13 11:11:36.642667 master-0 kubenswrapper[33013]: I0313 11:11:36.639697 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 13 11:11:36.646060 master-0 kubenswrapper[33013]: I0313 11:11:36.645086 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Mar 13 11:11:36.646060 master-0 kubenswrapper[33013]: I0313 11:11:36.645370 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Mar 13 11:11:36.652825 master-0 kubenswrapper[33013]: I0313 11:11:36.649565 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Mar 13 11:11:36.707346 master-0 kubenswrapper[33013]: I0313 11:11:36.702408 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 13 11:11:36.812631 master-0 kubenswrapper[33013]: I0313 11:11:36.806718 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45t9h\" (UniqueName: \"kubernetes.io/projected/11aa0b00-3b31-411e-bc21-1679ffcbc326-kube-api-access-45t9h\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.812631 master-0 kubenswrapper[33013]: I0313 11:11:36.806797 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11aa0b00-3b31-411e-bc21-1679ffcbc326-config-data\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.812631 master-0 kubenswrapper[33013]: I0313 11:11:36.806826 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11aa0b00-3b31-411e-bc21-1679ffcbc326-combined-ca-bundle\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.812631 master-0 kubenswrapper[33013]: I0313 11:11:36.806875 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/11aa0b00-3b31-411e-bc21-1679ffcbc326-memcached-tls-certs\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.812631 master-0 kubenswrapper[33013]: I0313 11:11:36.806900 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11aa0b00-3b31-411e-bc21-1679ffcbc326-kolla-config\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.931687 master-0 kubenswrapper[33013]: I0313 11:11:36.931497 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11aa0b00-3b31-411e-bc21-1679ffcbc326-config-data\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.931687 master-0 kubenswrapper[33013]: I0313 11:11:36.931613 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11aa0b00-3b31-411e-bc21-1679ffcbc326-combined-ca-bundle\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.931972 master-0 kubenswrapper[33013]: I0313 11:11:36.931791 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/11aa0b00-3b31-411e-bc21-1679ffcbc326-memcached-tls-certs\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.931972 master-0 kubenswrapper[33013]: I0313 11:11:36.931849 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11aa0b00-3b31-411e-bc21-1679ffcbc326-kolla-config\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.932081 master-0 kubenswrapper[33013]: I0313 11:11:36.932048 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45t9h\" (UniqueName: \"kubernetes.io/projected/11aa0b00-3b31-411e-bc21-1679ffcbc326-kube-api-access-45t9h\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.933790 master-0 kubenswrapper[33013]: I0313 11:11:36.933761 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11aa0b00-3b31-411e-bc21-1679ffcbc326-config-data\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.945540 master-0 kubenswrapper[33013]: I0313 11:11:36.945475 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/11aa0b00-3b31-411e-bc21-1679ffcbc326-memcached-tls-certs\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.946416 master-0 kubenswrapper[33013]: I0313 11:11:36.946366 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11aa0b00-3b31-411e-bc21-1679ffcbc326-kolla-config\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.948434 master-0 kubenswrapper[33013]: I0313 11:11:36.948286 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11aa0b00-3b31-411e-bc21-1679ffcbc326-combined-ca-bundle\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:36.957662 master-0 kubenswrapper[33013]: I0313 11:11:36.957599 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45t9h\" (UniqueName: \"kubernetes.io/projected/11aa0b00-3b31-411e-bc21-1679ffcbc326-kube-api-access-45t9h\") pod \"memcached-0\" (UID: \"11aa0b00-3b31-411e-bc21-1679ffcbc326\") " pod="openstack/memcached-0"
Mar 13 11:11:37.001952 master-0 kubenswrapper[33013]: I0313 11:11:37.001842 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 13 11:11:40.606274 master-0 kubenswrapper[33013]: I0313 11:11:40.606180 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 13 11:11:40.608722 master-0 kubenswrapper[33013]: I0313 11:11:40.608645 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.629698 master-0 kubenswrapper[33013]: I0313 11:11:40.623873 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 13 11:11:40.629698 master-0 kubenswrapper[33013]: I0313 11:11:40.624160 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 13 11:11:40.629698 master-0 kubenswrapper[33013]: I0313 11:11:40.626219 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 13 11:11:40.629698 master-0 kubenswrapper[33013]: I0313 11:11:40.626551 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Mar 13 11:11:40.629698 master-0 kubenswrapper[33013]: I0313 11:11:40.626706 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 13 11:11:40.629698 master-0 kubenswrapper[33013]: I0313 11:11:40.626824 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 13 11:11:40.629698 master-0 kubenswrapper[33013]: I0313 11:11:40.628451 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 13 11:11:40.783251 master-0 kubenswrapper[33013]: I0313 11:11:40.783187 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.783617 master-0 kubenswrapper[33013]: I0313 11:11:40.783483 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.784339 master-0 kubenswrapper[33013]: I0313 11:11:40.783583 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.784339 master-0 kubenswrapper[33013]: I0313 11:11:40.783664 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1267f71c-34a8-4904-bfb6-de85ae27cd8a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.784339 master-0 kubenswrapper[33013]: I0313 11:11:40.783687 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.784339 master-0 kubenswrapper[33013]: I0313 11:11:40.783783 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1267f71c-34a8-4904-bfb6-de85ae27cd8a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.784339 master-0 kubenswrapper[33013]: I0313 11:11:40.783877 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1267f71c-34a8-4904-bfb6-de85ae27cd8a-config-data\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.784339 master-0 kubenswrapper[33013]: I0313 11:11:40.784009 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1267f71c-34a8-4904-bfb6-de85ae27cd8a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.784882 master-0 kubenswrapper[33013]: I0313 11:11:40.784370 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1267f71c-34a8-4904-bfb6-de85ae27cd8a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.784882 master-0 kubenswrapper[33013]: I0313 11:11:40.784530 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76p9n\" (UniqueName: \"kubernetes.io/projected/1267f71c-34a8-4904-bfb6-de85ae27cd8a-kube-api-access-76p9n\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.785541 master-0 kubenswrapper[33013]: I0313 11:11:40.785061 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-53ff8c97-7792-4e00-82c3-c6706a5c8927\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f6df8086-9a45-4cba-a08c-36f514da43eb\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.887531 master-0 kubenswrapper[33013]: I0313 11:11:40.887384 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1267f71c-34a8-4904-bfb6-de85ae27cd8a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.887531 master-0 kubenswrapper[33013]: I0313 11:11:40.887483 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1267f71c-34a8-4904-bfb6-de85ae27cd8a-config-data\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.887531 master-0 kubenswrapper[33013]: I0313 11:11:40.887537 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1267f71c-34a8-4904-bfb6-de85ae27cd8a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.887902 master-0 kubenswrapper[33013]: I0313 11:11:40.887569 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1267f71c-34a8-4904-bfb6-de85ae27cd8a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.887902 master-0 kubenswrapper[33013]: I0313 11:11:40.887870 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76p9n\" (UniqueName: \"kubernetes.io/projected/1267f71c-34a8-4904-bfb6-de85ae27cd8a-kube-api-access-76p9n\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.887995 master-0 kubenswrapper[33013]: I0313 11:11:40.887961 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-53ff8c97-7792-4e00-82c3-c6706a5c8927\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f6df8086-9a45-4cba-a08c-36f514da43eb\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.888054 master-0 kubenswrapper[33013]: I0313 11:11:40.888031 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.888103 master-0 kubenswrapper[33013]: I0313 11:11:40.888053 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.888149 master-0 kubenswrapper[33013]: I0313 11:11:40.888115 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.888196 master-0 kubenswrapper[33013]: I0313 11:11:40.888159 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1267f71c-34a8-4904-bfb6-de85ae27cd8a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.888196 master-0 kubenswrapper[33013]: I0313 11:11:40.888188 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.888797 master-0 kubenswrapper[33013]: I0313 11:11:40.888771 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.891677 master-0 kubenswrapper[33013]: I0313 11:11:40.890549 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1267f71c-34a8-4904-bfb6-de85ae27cd8a-config-data\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.891959 master-0 kubenswrapper[33013]: I0313 11:11:40.891908 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.892963 master-0 kubenswrapper[33013]: I0313 11:11:40.892889 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1267f71c-34a8-4904-bfb6-de85ae27cd8a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.893534 master-0 kubenswrapper[33013]: I0313 11:11:40.893488 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 13 11:11:40.893710 master-0 kubenswrapper[33013]: I0313 11:11:40.893535 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-53ff8c97-7792-4e00-82c3-c6706a5c8927\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f6df8086-9a45-4cba-a08c-36f514da43eb\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/3587a971cb283af7f29687de0161d9a31133c753eb8905c5acd40adf98d48e07/globalmount\"" pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.893710 master-0 kubenswrapper[33013]: I0313 11:11:40.893639 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1267f71c-34a8-4904-bfb6-de85ae27cd8a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.902661 master-0 kubenswrapper[33013]: I0313 11:11:40.896677 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.902661 master-0 kubenswrapper[33013]: I0313 11:11:40.900383 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1267f71c-34a8-4904-bfb6-de85ae27cd8a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.902661 master-0 kubenswrapper[33013]: I0313 11:11:40.902006 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1267f71c-34a8-4904-bfb6-de85ae27cd8a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.913800 master-0 kubenswrapper[33013]: I0313 11:11:40.913755 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1267f71c-34a8-4904-bfb6-de85ae27cd8a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:40.926933 master-0 kubenswrapper[33013]: I0313 11:11:40.926880 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76p9n\" (UniqueName: \"kubernetes.io/projected/1267f71c-34a8-4904-bfb6-de85ae27cd8a-kube-api-access-76p9n\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0"
Mar 13 11:11:41.338337 master-0 kubenswrapper[33013]: I0313 11:11:41.338233 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 13 11:11:41.359801 master-0 kubenswrapper[33013]: I0313 11:11:41.359755 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.372620 master-0 kubenswrapper[33013]: I0313 11:11:41.367111 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 13 11:11:41.372620 master-0 kubenswrapper[33013]: I0313 11:11:41.367379 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Mar 13 11:11:41.372620 master-0 kubenswrapper[33013]: I0313 11:11:41.367536 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Mar 13 11:11:41.389007 master-0 kubenswrapper[33013]: I0313 11:11:41.388914 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Mar 13 11:11:41.393211 master-0 kubenswrapper[33013]: I0313 11:11:41.393005 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Mar 13 11:11:41.393445 master-0 kubenswrapper[33013]: I0313 11:11:41.393422 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Mar 13 11:11:41.416796 master-0 kubenswrapper[33013]: I0313 11:11:41.404703 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.420289 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a3ecc521-569a-4aca-9e52-6e504c9f96de-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.420384 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a3ecc521-569a-4aca-9e52-6e504c9f96de-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.420511 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.420552 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a3ecc521-569a-4aca-9e52-6e504c9f96de-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.420649 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.420700 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m9pl\" (UniqueName: \"kubernetes.io/projected/a3ecc521-569a-4aca-9e52-6e504c9f96de-kube-api-access-5m9pl\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.420819 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a3ecc521-569a-4aca-9e52-6e504c9f96de-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.420842 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.420874 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.421098 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3ecc521-569a-4aca-9e52-6e504c9f96de-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.427615 master-0 kubenswrapper[33013]: I0313 11:11:41.421128 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-92d3f51d-de9c-44a5-9d5f-443ab8fd8826\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4d871c84-0154-429f-9677-db43a5011b57\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.522417 master-0 kubenswrapper[33013]: I0313 11:11:41.522368 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3ecc521-569a-4aca-9e52-6e504c9f96de-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.522417 master-0 kubenswrapper[33013]: I0313 11:11:41.522417 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-92d3f51d-de9c-44a5-9d5f-443ab8fd8826\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4d871c84-0154-429f-9677-db43a5011b57\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.522724 master-0 kubenswrapper[33013]: I0313 11:11:41.522466 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a3ecc521-569a-4aca-9e52-6e504c9f96de-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.522724 master-0 kubenswrapper[33013]: I0313 11:11:41.522482 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a3ecc521-569a-4aca-9e52-6e504c9f96de-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.522724 master-0 kubenswrapper[33013]: I0313 11:11:41.522497 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 13 11:11:41.522724 master-0 kubenswrapper[33013]: I0313
11:11:41.522517 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a3ecc521-569a-4aca-9e52-6e504c9f96de-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.522724 master-0 kubenswrapper[33013]: I0313 11:11:41.522545 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.522724 master-0 kubenswrapper[33013]: I0313 11:11:41.522564 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m9pl\" (UniqueName: \"kubernetes.io/projected/a3ecc521-569a-4aca-9e52-6e504c9f96de-kube-api-access-5m9pl\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.522724 master-0 kubenswrapper[33013]: I0313 11:11:41.522629 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a3ecc521-569a-4aca-9e52-6e504c9f96de-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.522724 master-0 kubenswrapper[33013]: I0313 11:11:41.522647 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.522724 master-0 kubenswrapper[33013]: I0313 
11:11:41.522678 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.524442 master-0 kubenswrapper[33013]: I0313 11:11:41.524405 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3ecc521-569a-4aca-9e52-6e504c9f96de-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.524790 master-0 kubenswrapper[33013]: I0313 11:11:41.524763 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.524790 master-0 kubenswrapper[33013]: I0313 11:11:41.524773 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a3ecc521-569a-4aca-9e52-6e504c9f96de-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.525070 master-0 kubenswrapper[33013]: I0313 11:11:41.525048 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.525422 master-0 kubenswrapper[33013]: I0313 11:11:41.525402 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a3ecc521-569a-4aca-9e52-6e504c9f96de-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.526832 master-0 kubenswrapper[33013]: I0313 11:11:41.526498 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 11:11:41.526832 master-0 kubenswrapper[33013]: I0313 11:11:41.526525 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-92d3f51d-de9c-44a5-9d5f-443ab8fd8826\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4d871c84-0154-429f-9677-db43a5011b57\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/05a32b8b793d4916926782ddaab58d885d0057ddcc1ac589e0128ce4efd6f31b/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.540739 master-0 kubenswrapper[33013]: I0313 11:11:41.540684 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.542447 master-0 kubenswrapper[33013]: I0313 11:11:41.542161 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a3ecc521-569a-4aca-9e52-6e504c9f96de-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.549105 master-0 kubenswrapper[33013]: I0313 11:11:41.549069 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/a3ecc521-569a-4aca-9e52-6e504c9f96de-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.558412 master-0 kubenswrapper[33013]: I0313 11:11:41.558349 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a3ecc521-569a-4aca-9e52-6e504c9f96de-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:41.561527 master-0 kubenswrapper[33013]: I0313 11:11:41.561251 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m9pl\" (UniqueName: \"kubernetes.io/projected/a3ecc521-569a-4aca-9e52-6e504c9f96de-kube-api-access-5m9pl\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:42.457778 master-0 kubenswrapper[33013]: I0313 11:11:42.457715 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gvr6s"] Mar 13 11:11:42.459837 master-0 kubenswrapper[33013]: I0313 11:11:42.459450 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.498622 master-0 kubenswrapper[33013]: I0313 11:11:42.486412 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 13 11:11:42.498622 master-0 kubenswrapper[33013]: I0313 11:11:42.487205 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 13 11:11:42.498622 master-0 kubenswrapper[33013]: I0313 11:11:42.488000 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gvr6s"] Mar 13 11:11:42.498622 master-0 kubenswrapper[33013]: I0313 11:11:42.497532 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 13 11:11:42.526144 master-0 kubenswrapper[33013]: I0313 11:11:42.511449 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 13 11:11:42.526144 master-0 kubenswrapper[33013]: I0313 11:11:42.517017 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 13 11:11:42.526144 master-0 kubenswrapper[33013]: I0313 11:11:42.517343 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 13 11:11:42.526144 master-0 kubenswrapper[33013]: I0313 11:11:42.517484 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 13 11:11:42.611196 master-0 kubenswrapper[33013]: I0313 11:11:42.591894 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 13 11:11:42.697989 master-0 kubenswrapper[33013]: I0313 11:11:42.697826 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df93307e-94fa-45f4-b6b5-5c84b07b116d-var-run-ovn\") pod \"ovn-controller-gvr6s\" (UID: 
\"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.698257 master-0 kubenswrapper[33013]: I0313 11:11:42.698010 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df93307e-94fa-45f4-b6b5-5c84b07b116d-combined-ca-bundle\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.698363 master-0 kubenswrapper[33013]: I0313 11:11:42.698311 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df93307e-94fa-45f4-b6b5-5c84b07b116d-scripts\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.698428 master-0 kubenswrapper[33013]: I0313 11:11:42.698370 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/816f8748-d874-491e-8509-d05a7f0334c6-config-data-default\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.698465 master-0 kubenswrapper[33013]: I0313 11:11:42.698409 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95ed088c-a277-48e2-8605-34a904b5fb22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5c0f06ac-2a89-474b-a418-25a976482e11\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.698522 master-0 kubenswrapper[33013]: I0313 11:11:42.698494 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r4kk\" (UniqueName: 
\"kubernetes.io/projected/816f8748-d874-491e-8509-d05a7f0334c6-kube-api-access-4r4kk\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.698630 master-0 kubenswrapper[33013]: I0313 11:11:42.698570 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/816f8748-d874-491e-8509-d05a7f0334c6-kolla-config\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.698744 master-0 kubenswrapper[33013]: I0313 11:11:42.698660 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrwsp\" (UniqueName: \"kubernetes.io/projected/df93307e-94fa-45f4-b6b5-5c84b07b116d-kube-api-access-hrwsp\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.698793 master-0 kubenswrapper[33013]: I0313 11:11:42.698737 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df93307e-94fa-45f4-b6b5-5c84b07b116d-ovn-controller-tls-certs\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.698890 master-0 kubenswrapper[33013]: I0313 11:11:42.698862 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816f8748-d874-491e-8509-d05a7f0334c6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.698998 master-0 kubenswrapper[33013]: I0313 11:11:42.698937 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/816f8748-d874-491e-8509-d05a7f0334c6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.699097 master-0 kubenswrapper[33013]: I0313 11:11:42.699052 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df93307e-94fa-45f4-b6b5-5c84b07b116d-var-run\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.699241 master-0 kubenswrapper[33013]: I0313 11:11:42.699199 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/816f8748-d874-491e-8509-d05a7f0334c6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.699327 master-0 kubenswrapper[33013]: I0313 11:11:42.699252 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df93307e-94fa-45f4-b6b5-5c84b07b116d-var-log-ovn\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.699327 master-0 kubenswrapper[33013]: I0313 11:11:42.699294 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/816f8748-d874-491e-8509-d05a7f0334c6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.705032 master-0 kubenswrapper[33013]: I0313 11:11:42.704971 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-53ff8c97-7792-4e00-82c3-c6706a5c8927\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f6df8086-9a45-4cba-a08c-36f514da43eb\") pod \"rabbitmq-server-0\" (UID: \"1267f71c-34a8-4904-bfb6-de85ae27cd8a\") " pod="openstack/rabbitmq-server-0" Mar 13 11:11:42.739299 master-0 kubenswrapper[33013]: I0313 11:11:42.739168 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-b8kgc"] Mar 13 11:11:42.762675 master-0 kubenswrapper[33013]: I0313 11:11:42.762033 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:42.771840 master-0 kubenswrapper[33013]: I0313 11:11:42.771790 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b8kgc"] Mar 13 11:11:42.772105 master-0 kubenswrapper[33013]: I0313 11:11:42.771988 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 13 11:11:42.836704 master-0 kubenswrapper[33013]: I0313 11:11:42.836557 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df93307e-94fa-45f4-b6b5-5c84b07b116d-scripts\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.836704 master-0 kubenswrapper[33013]: I0313 11:11:42.836686 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/816f8748-d874-491e-8509-d05a7f0334c6-config-data-default\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.837390 master-0 kubenswrapper[33013]: I0313 11:11:42.836727 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-95ed088c-a277-48e2-8605-34a904b5fb22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5c0f06ac-2a89-474b-a418-25a976482e11\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.837390 master-0 kubenswrapper[33013]: I0313 11:11:42.836785 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r4kk\" (UniqueName: \"kubernetes.io/projected/816f8748-d874-491e-8509-d05a7f0334c6-kube-api-access-4r4kk\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.837390 master-0 kubenswrapper[33013]: I0313 11:11:42.836857 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/816f8748-d874-491e-8509-d05a7f0334c6-kolla-config\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.837390 master-0 kubenswrapper[33013]: I0313 11:11:42.836957 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrwsp\" (UniqueName: \"kubernetes.io/projected/df93307e-94fa-45f4-b6b5-5c84b07b116d-kube-api-access-hrwsp\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.837390 master-0 kubenswrapper[33013]: I0313 11:11:42.837024 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df93307e-94fa-45f4-b6b5-5c84b07b116d-ovn-controller-tls-certs\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.837390 master-0 kubenswrapper[33013]: I0313 11:11:42.837121 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816f8748-d874-491e-8509-d05a7f0334c6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.837390 master-0 kubenswrapper[33013]: I0313 11:11:42.837163 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/816f8748-d874-491e-8509-d05a7f0334c6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.837390 master-0 kubenswrapper[33013]: I0313 11:11:42.837237 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df93307e-94fa-45f4-b6b5-5c84b07b116d-var-run\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.837390 master-0 kubenswrapper[33013]: I0313 11:11:42.837398 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df93307e-94fa-45f4-b6b5-5c84b07b116d-var-log-ovn\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.837697 master-0 kubenswrapper[33013]: I0313 11:11:42.837427 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/816f8748-d874-491e-8509-d05a7f0334c6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.838409 master-0 kubenswrapper[33013]: I0313 11:11:42.838153 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/816f8748-d874-491e-8509-d05a7f0334c6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.838409 master-0 kubenswrapper[33013]: I0313 11:11:42.838317 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df93307e-94fa-45f4-b6b5-5c84b07b116d-var-run-ovn\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.838409 master-0 kubenswrapper[33013]: I0313 11:11:42.838365 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df93307e-94fa-45f4-b6b5-5c84b07b116d-combined-ca-bundle\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.845630 master-0 kubenswrapper[33013]: I0313 11:11:42.845539 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df93307e-94fa-45f4-b6b5-5c84b07b116d-var-run\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.845958 master-0 kubenswrapper[33013]: I0313 11:11:42.845883 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df93307e-94fa-45f4-b6b5-5c84b07b116d-var-run-ovn\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.846175 master-0 kubenswrapper[33013]: I0313 11:11:42.846107 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df93307e-94fa-45f4-b6b5-5c84b07b116d-var-log-ovn\") pod \"ovn-controller-gvr6s\" 
(UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.848115 master-0 kubenswrapper[33013]: I0313 11:11:42.848087 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df93307e-94fa-45f4-b6b5-5c84b07b116d-scripts\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.853631 master-0 kubenswrapper[33013]: I0313 11:11:42.853551 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/816f8748-d874-491e-8509-d05a7f0334c6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.853825 master-0 kubenswrapper[33013]: I0313 11:11:42.853707 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/816f8748-d874-491e-8509-d05a7f0334c6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.863715 master-0 kubenswrapper[33013]: I0313 11:11:42.863568 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df93307e-94fa-45f4-b6b5-5c84b07b116d-combined-ca-bundle\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.863975 master-0 kubenswrapper[33013]: I0313 11:11:42.863623 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/816f8748-d874-491e-8509-d05a7f0334c6-kolla-config\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.863975 
master-0 kubenswrapper[33013]: I0313 11:11:42.863756 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df93307e-94fa-45f4-b6b5-5c84b07b116d-ovn-controller-tls-certs\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.867395 master-0 kubenswrapper[33013]: I0313 11:11:42.867166 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/816f8748-d874-491e-8509-d05a7f0334c6-config-data-default\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.879127 master-0 kubenswrapper[33013]: I0313 11:11:42.878215 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 11:11:42.879127 master-0 kubenswrapper[33013]: I0313 11:11:42.878559 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95ed088c-a277-48e2-8605-34a904b5fb22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5c0f06ac-2a89-474b-a418-25a976482e11\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/42c3c53fba495d3e386532422529d9c89f66e4556d20b69c11ca23d4748bdef9/globalmount\"" pod="openstack/openstack-galera-0" Mar 13 11:11:42.896176 master-0 kubenswrapper[33013]: I0313 11:11:42.896119 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816f8748-d874-491e-8509-d05a7f0334c6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.923660 master-0 kubenswrapper[33013]: I0313 11:11:42.923580 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/816f8748-d874-491e-8509-d05a7f0334c6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.924264 master-0 kubenswrapper[33013]: I0313 11:11:42.924221 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrwsp\" (UniqueName: \"kubernetes.io/projected/df93307e-94fa-45f4-b6b5-5c84b07b116d-kube-api-access-hrwsp\") pod \"ovn-controller-gvr6s\" (UID: \"df93307e-94fa-45f4-b6b5-5c84b07b116d\") " pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:42.938111 master-0 kubenswrapper[33013]: I0313 11:11:42.938003 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r4kk\" (UniqueName: \"kubernetes.io/projected/816f8748-d874-491e-8509-d05a7f0334c6-kube-api-access-4r4kk\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:42.957767 master-0 kubenswrapper[33013]: I0313 11:11:42.957670 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-var-log\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:42.958035 master-0 kubenswrapper[33013]: I0313 11:11:42.957824 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-var-run\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:42.958035 master-0 kubenswrapper[33013]: I0313 11:11:42.957854 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlb6l\" (UniqueName: \"kubernetes.io/projected/79ae2dd9-ee06-441a-bced-18a3bec394cf-kube-api-access-wlb6l\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:42.958035 master-0 kubenswrapper[33013]: I0313 11:11:42.957897 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79ae2dd9-ee06-441a-bced-18a3bec394cf-scripts\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:42.958035 master-0 kubenswrapper[33013]: I0313 11:11:42.957953 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-etc-ovs\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:42.958035 master-0 kubenswrapper[33013]: I0313 11:11:42.958002 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-var-lib\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.060502 master-0 kubenswrapper[33013]: I0313 11:11:43.059986 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-var-log\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.060502 master-0 kubenswrapper[33013]: I0313 11:11:43.060127 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-var-run\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.060502 master-0 kubenswrapper[33013]: I0313 11:11:43.060246 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlb6l\" (UniqueName: \"kubernetes.io/projected/79ae2dd9-ee06-441a-bced-18a3bec394cf-kube-api-access-wlb6l\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.060502 master-0 kubenswrapper[33013]: I0313 11:11:43.060273 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-var-run\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.060502 master-0 kubenswrapper[33013]: I0313 11:11:43.060451 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-var-log\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.061093 master-0 kubenswrapper[33013]: I0313 11:11:43.060876 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79ae2dd9-ee06-441a-bced-18a3bec394cf-scripts\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.061093 master-0 kubenswrapper[33013]: I0313 11:11:43.060985 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-etc-ovs\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.061364 master-0 kubenswrapper[33013]: I0313 11:11:43.061305 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-etc-ovs\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.061443 master-0 kubenswrapper[33013]: I0313 11:11:43.061400 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-var-lib\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.061751 master-0 kubenswrapper[33013]: I0313 11:11:43.061575 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/79ae2dd9-ee06-441a-bced-18a3bec394cf-var-lib\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.075535 master-0 kubenswrapper[33013]: I0313 11:11:43.065994 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79ae2dd9-ee06-441a-bced-18a3bec394cf-scripts\") pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.099065 master-0 kubenswrapper[33013]: I0313 11:11:43.098989 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlb6l\" (UniqueName: \"kubernetes.io/projected/79ae2dd9-ee06-441a-bced-18a3bec394cf-kube-api-access-wlb6l\") 
pod \"ovn-controller-ovs-b8kgc\" (UID: \"79ae2dd9-ee06-441a-bced-18a3bec394cf\") " pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.102405 master-0 kubenswrapper[33013]: I0313 11:11:43.102356 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:11:43.172283 master-0 kubenswrapper[33013]: I0313 11:11:43.172198 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gvr6s" Mar 13 11:11:43.404937 master-0 kubenswrapper[33013]: I0313 11:11:43.404739 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 11:11:43.412179 master-0 kubenswrapper[33013]: I0313 11:11:43.412119 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.426425 master-0 kubenswrapper[33013]: I0313 11:11:43.426350 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 13 11:11:43.426425 master-0 kubenswrapper[33013]: I0313 11:11:43.426486 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 13 11:11:43.427916 master-0 kubenswrapper[33013]: I0313 11:11:43.426607 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 13 11:11:43.434690 master-0 kubenswrapper[33013]: I0313 11:11:43.434186 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 11:11:43.576093 master-0 kubenswrapper[33013]: I0313 11:11:43.576008 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fjs8\" (UniqueName: \"kubernetes.io/projected/1fa895db-cffa-4a2b-88e0-cd7b59474721-kube-api-access-2fjs8\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " 
pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.576975 master-0 kubenswrapper[33013]: I0313 11:11:43.576140 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fa895db-cffa-4a2b-88e0-cd7b59474721-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.576975 master-0 kubenswrapper[33013]: I0313 11:11:43.576204 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa895db-cffa-4a2b-88e0-cd7b59474721-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.576975 master-0 kubenswrapper[33013]: I0313 11:11:43.576239 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fa895db-cffa-4a2b-88e0-cd7b59474721-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.576975 master-0 kubenswrapper[33013]: I0313 11:11:43.576277 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ddad9eb-54d3-404e-8256-f6d996a22dfb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^09c96f37-ea6b-4266-9979-d84b3c421368\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.576975 master-0 kubenswrapper[33013]: I0313 11:11:43.576309 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1fa895db-cffa-4a2b-88e0-cd7b59474721-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.576975 master-0 kubenswrapper[33013]: I0313 11:11:43.576337 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa895db-cffa-4a2b-88e0-cd7b59474721-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.576975 master-0 kubenswrapper[33013]: I0313 11:11:43.576410 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fa895db-cffa-4a2b-88e0-cd7b59474721-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.681572 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fa895db-cffa-4a2b-88e0-cd7b59474721-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.681755 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa895db-cffa-4a2b-88e0-cd7b59474721-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.681951 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fa895db-cffa-4a2b-88e0-cd7b59474721-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.681996 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7ddad9eb-54d3-404e-8256-f6d996a22dfb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^09c96f37-ea6b-4266-9979-d84b3c421368\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.682033 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa895db-cffa-4a2b-88e0-cd7b59474721-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.682069 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa895db-cffa-4a2b-88e0-cd7b59474721-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.682106 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fa895db-cffa-4a2b-88e0-cd7b59474721-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.682172 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2fjs8\" (UniqueName: \"kubernetes.io/projected/1fa895db-cffa-4a2b-88e0-cd7b59474721-kube-api-access-2fjs8\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.682703 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fa895db-cffa-4a2b-88e0-cd7b59474721-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.684939 master-0 kubenswrapper[33013]: I0313 11:11:43.684079 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fa895db-cffa-4a2b-88e0-cd7b59474721-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.688136 master-0 kubenswrapper[33013]: I0313 11:11:43.688078 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa895db-cffa-4a2b-88e0-cd7b59474721-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.689070 master-0 kubenswrapper[33013]: I0313 11:11:43.689025 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fa895db-cffa-4a2b-88e0-cd7b59474721-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.689306 master-0 kubenswrapper[33013]: I0313 11:11:43.689269 33013 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 11:11:43.689564 master-0 kubenswrapper[33013]: I0313 11:11:43.689539 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7ddad9eb-54d3-404e-8256-f6d996a22dfb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^09c96f37-ea6b-4266-9979-d84b3c421368\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c9ea105a47f00252c3ad0a77870caf4c3fdbc9c665c73e7c2145c0cd6a49c69f/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.689896 master-0 kubenswrapper[33013]: I0313 11:11:43.689848 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa895db-cffa-4a2b-88e0-cd7b59474721-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.701695 master-0 kubenswrapper[33013]: I0313 11:11:43.700790 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fa895db-cffa-4a2b-88e0-cd7b59474721-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:43.709516 master-0 kubenswrapper[33013]: I0313 11:11:43.709460 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fjs8\" (UniqueName: \"kubernetes.io/projected/1fa895db-cffa-4a2b-88e0-cd7b59474721-kube-api-access-2fjs8\") pod \"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:44.255423 master-0 kubenswrapper[33013]: I0313 11:11:44.255371 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-92d3f51d-de9c-44a5-9d5f-443ab8fd8826\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4d871c84-0154-429f-9677-db43a5011b57\") pod \"rabbitmq-cell1-server-0\" (UID: \"a3ecc521-569a-4aca-9e52-6e504c9f96de\") " pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:44.459918 master-0 kubenswrapper[33013]: I0313 11:11:44.459672 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:11:45.320727 master-0 kubenswrapper[33013]: I0313 11:11:45.320686 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-95ed088c-a277-48e2-8605-34a904b5fb22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5c0f06ac-2a89-474b-a418-25a976482e11\") pod \"openstack-galera-0\" (UID: \"816f8748-d874-491e-8509-d05a7f0334c6\") " pod="openstack/openstack-galera-0" Mar 13 11:11:45.479635 master-0 kubenswrapper[33013]: I0313 11:11:45.478097 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 11:11:45.487772 master-0 kubenswrapper[33013]: I0313 11:11:45.486064 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.490150 master-0 kubenswrapper[33013]: I0313 11:11:45.489985 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 13 11:11:45.490272 master-0 kubenswrapper[33013]: I0313 11:11:45.490187 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 13 11:11:45.490329 master-0 kubenswrapper[33013]: I0313 11:11:45.490312 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 13 11:11:45.498664 master-0 kubenswrapper[33013]: I0313 11:11:45.495926 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 13 11:11:45.657626 master-0 kubenswrapper[33013]: I0313 11:11:45.657540 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 13 11:11:45.911959 master-0 kubenswrapper[33013]: I0313 11:11:45.880012 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.911959 master-0 kubenswrapper[33013]: I0313 11:11:45.880082 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b90e4e02-266c-49b7-94cc-93ab3512dfa1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^21cbe44f-cd94-4ae4-8306-d5f2dcb730f2\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.911959 master-0 kubenswrapper[33013]: I0313 11:11:45.880109 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-config\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.911959 master-0 kubenswrapper[33013]: I0313 11:11:45.880159 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h68l\" (UniqueName: \"kubernetes.io/projected/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-kube-api-access-9h68l\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.911959 master-0 kubenswrapper[33013]: I0313 11:11:45.880190 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.911959 master-0 kubenswrapper[33013]: I0313 11:11:45.880248 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.911959 master-0 kubenswrapper[33013]: I0313 11:11:45.880273 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.911959 master-0 kubenswrapper[33013]: I0313 11:11:45.880297 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.911959 master-0 kubenswrapper[33013]: I0313 11:11:45.883329 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 11:11:45.986812 master-0 kubenswrapper[33013]: I0313 11:11:45.984795 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.986812 master-0 kubenswrapper[33013]: I0313 11:11:45.984876 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.986812 master-0 kubenswrapper[33013]: I0313 11:11:45.985141 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.986812 master-0 kubenswrapper[33013]: I0313 11:11:45.985481 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.986812 master-0 kubenswrapper[33013]: I0313 11:11:45.985524 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b90e4e02-266c-49b7-94cc-93ab3512dfa1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^21cbe44f-cd94-4ae4-8306-d5f2dcb730f2\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.986812 master-0 kubenswrapper[33013]: I0313 11:11:45.985546 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-config\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.986812 master-0 kubenswrapper[33013]: I0313 11:11:45.985626 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h68l\" (UniqueName: \"kubernetes.io/projected/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-kube-api-access-9h68l\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.986812 master-0 kubenswrapper[33013]: I0313 11:11:45.985660 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.986812 master-0 kubenswrapper[33013]: I0313 11:11:45.986172 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.991532 master-0 kubenswrapper[33013]: I0313 11:11:45.987326 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-config\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.991532 master-0 kubenswrapper[33013]: I0313 11:11:45.990270 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.991532 master-0 kubenswrapper[33013]: I0313 11:11:45.990744 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.991532 master-0 kubenswrapper[33013]: I0313 11:11:45.991498 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 11:11:45.991532 master-0 kubenswrapper[33013]: I0313 11:11:45.991524 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b90e4e02-266c-49b7-94cc-93ab3512dfa1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^21cbe44f-cd94-4ae4-8306-d5f2dcb730f2\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/239bb25b05450cbfa047156f41cec60ef6eec2492700622f8f8d23448886d9ac/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:45.994186 master-0 kubenswrapper[33013]: I0313 11:11:45.992034 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:46.008491 master-0 kubenswrapper[33013]: I0313 11:11:46.008397 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:46.016370 master-0 kubenswrapper[33013]: I0313 11:11:46.012916 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h68l\" (UniqueName: \"kubernetes.io/projected/6b0ea88a-1819-4ff0-b669-8635de5bf6f8-kube-api-access-9h68l\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:46.456427 master-0 kubenswrapper[33013]: I0313 11:11:46.455377 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7ddad9eb-54d3-404e-8256-f6d996a22dfb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^09c96f37-ea6b-4266-9979-d84b3c421368\") pod 
\"openstack-cell1-galera-0\" (UID: \"1fa895db-cffa-4a2b-88e0-cd7b59474721\") " pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:46.733684 master-0 kubenswrapper[33013]: I0313 11:11:46.733148 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 13 11:11:47.780369 master-0 kubenswrapper[33013]: I0313 11:11:47.780266 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b90e4e02-266c-49b7-94cc-93ab3512dfa1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^21cbe44f-cd94-4ae4-8306-d5f2dcb730f2\") pod \"ovsdbserver-nb-0\" (UID: \"6b0ea88a-1819-4ff0-b669-8635de5bf6f8\") " pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:47.939090 master-0 kubenswrapper[33013]: I0313 11:11:47.938960 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 13 11:11:48.124092 master-0 kubenswrapper[33013]: I0313 11:11:48.124013 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 11:11:48.126970 master-0 kubenswrapper[33013]: I0313 11:11:48.126902 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.133797 master-0 kubenswrapper[33013]: I0313 11:11:48.133734 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 13 11:11:48.142325 master-0 kubenswrapper[33013]: I0313 11:11:48.142264 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 11:11:48.146988 master-0 kubenswrapper[33013]: I0313 11:11:48.146949 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 13 11:11:48.148344 master-0 kubenswrapper[33013]: I0313 11:11:48.148323 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 13 11:11:48.241858 master-0 kubenswrapper[33013]: I0313 11:11:48.241684 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33afdb8-ba14-4d4a-9031-63100db5abe1-config\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.242095 master-0 kubenswrapper[33013]: I0313 11:11:48.241864 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a33afdb8-ba14-4d4a-9031-63100db5abe1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.242196 master-0 kubenswrapper[33013]: I0313 11:11:48.242154 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a33afdb8-ba14-4d4a-9031-63100db5abe1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.242251 master-0 
kubenswrapper[33013]: I0313 11:11:48.242235 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-464c07f6-1555-4291-8143-11952bb7e216\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b44afcd5-0379-4973-9382-d7ec6b92b57d\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.242314 master-0 kubenswrapper[33013]: I0313 11:11:48.242274 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a33afdb8-ba14-4d4a-9031-63100db5abe1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.242363 master-0 kubenswrapper[33013]: I0313 11:11:48.242317 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a33afdb8-ba14-4d4a-9031-63100db5abe1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.242758 master-0 kubenswrapper[33013]: I0313 11:11:48.242604 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25kdf\" (UniqueName: \"kubernetes.io/projected/a33afdb8-ba14-4d4a-9031-63100db5abe1-kube-api-access-25kdf\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.242758 master-0 kubenswrapper[33013]: I0313 11:11:48.242675 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a33afdb8-ba14-4d4a-9031-63100db5abe1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " 
pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.344839 master-0 kubenswrapper[33013]: I0313 11:11:48.344777 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25kdf\" (UniqueName: \"kubernetes.io/projected/a33afdb8-ba14-4d4a-9031-63100db5abe1-kube-api-access-25kdf\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.345139 master-0 kubenswrapper[33013]: I0313 11:11:48.344854 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a33afdb8-ba14-4d4a-9031-63100db5abe1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.345139 master-0 kubenswrapper[33013]: I0313 11:11:48.344887 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33afdb8-ba14-4d4a-9031-63100db5abe1-config\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.345139 master-0 kubenswrapper[33013]: I0313 11:11:48.344926 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a33afdb8-ba14-4d4a-9031-63100db5abe1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.345139 master-0 kubenswrapper[33013]: I0313 11:11:48.345003 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a33afdb8-ba14-4d4a-9031-63100db5abe1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.345439 master-0 kubenswrapper[33013]: I0313 11:11:48.345345 
33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-464c07f6-1555-4291-8143-11952bb7e216\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b44afcd5-0379-4973-9382-d7ec6b92b57d\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.345439 master-0 kubenswrapper[33013]: I0313 11:11:48.345388 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a33afdb8-ba14-4d4a-9031-63100db5abe1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.345439 master-0 kubenswrapper[33013]: I0313 11:11:48.345422 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a33afdb8-ba14-4d4a-9031-63100db5abe1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.345764 master-0 kubenswrapper[33013]: I0313 11:11:48.345702 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a33afdb8-ba14-4d4a-9031-63100db5abe1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.346511 master-0 kubenswrapper[33013]: I0313 11:11:48.346472 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33afdb8-ba14-4d4a-9031-63100db5abe1-config\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.346764 master-0 kubenswrapper[33013]: I0313 11:11:48.346720 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a33afdb8-ba14-4d4a-9031-63100db5abe1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.347276 master-0 kubenswrapper[33013]: I0313 11:11:48.347242 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 11:11:48.347349 master-0 kubenswrapper[33013]: I0313 11:11:48.347280 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-464c07f6-1555-4291-8143-11952bb7e216\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b44afcd5-0379-4973-9382-d7ec6b92b57d\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/725f39326b189037ba56658991d05a01124d130e5015e5a6ded3704b53b8d978/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.348796 master-0 kubenswrapper[33013]: I0313 11:11:48.348762 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a33afdb8-ba14-4d4a-9031-63100db5abe1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.349563 master-0 kubenswrapper[33013]: I0313 11:11:48.349517 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a33afdb8-ba14-4d4a-9031-63100db5abe1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.371421 master-0 kubenswrapper[33013]: I0313 11:11:48.366923 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a33afdb8-ba14-4d4a-9031-63100db5abe1-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:48.380556 master-0 kubenswrapper[33013]: I0313 11:11:48.380498 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25kdf\" (UniqueName: \"kubernetes.io/projected/a33afdb8-ba14-4d4a-9031-63100db5abe1-kube-api-access-25kdf\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:49.695969 master-0 kubenswrapper[33013]: I0313 11:11:49.695905 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-464c07f6-1555-4291-8143-11952bb7e216\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b44afcd5-0379-4973-9382-d7ec6b92b57d\") pod \"ovsdbserver-sb-0\" (UID: \"a33afdb8-ba14-4d4a-9031-63100db5abe1\") " pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:49.700769 master-0 kubenswrapper[33013]: I0313 11:11:49.700700 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 13 11:11:53.692068 master-0 kubenswrapper[33013]: I0313 11:11:53.691949 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b8kgc"] Mar 13 11:11:53.990696 master-0 kubenswrapper[33013]: W0313 11:11:53.989270 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79ae2dd9_ee06_441a_bced_18a3bec394cf.slice/crio-37068ada8f6547d9be7b0c538cccaa5534ce1e3d0c0b9b8c8727cf3bb8447db4 WatchSource:0}: Error finding container 37068ada8f6547d9be7b0c538cccaa5534ce1e3d0c0b9b8c8727cf3bb8447db4: Status 404 returned error can't find the container with id 37068ada8f6547d9be7b0c538cccaa5534ce1e3d0c0b9b8c8727cf3bb8447db4 Mar 13 11:11:54.219356 master-0 kubenswrapper[33013]: I0313 11:11:54.205829 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b8kgc" event={"ID":"79ae2dd9-ee06-441a-bced-18a3bec394cf","Type":"ContainerStarted","Data":"37068ada8f6547d9be7b0c538cccaa5534ce1e3d0c0b9b8c8727cf3bb8447db4"} Mar 13 11:11:54.517214 master-0 kubenswrapper[33013]: I0313 11:11:54.517160 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 13 11:11:55.146495 master-0 kubenswrapper[33013]: W0313 11:11:55.146451 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod816f8748_d874_491e_8509_d05a7f0334c6.slice/crio-cc280b4908eee5e7286b29ac11f3991efe7aef3b1518b2a13e354037c2d9e748 WatchSource:0}: Error finding container cc280b4908eee5e7286b29ac11f3991efe7aef3b1518b2a13e354037c2d9e748: Status 404 returned error can't find the container with id cc280b4908eee5e7286b29ac11f3991efe7aef3b1518b2a13e354037c2d9e748 Mar 13 11:11:55.152254 master-0 kubenswrapper[33013]: I0313 11:11:55.152201 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gvr6s"] 
Mar 13 11:11:55.193629 master-0 kubenswrapper[33013]: I0313 11:11:55.178345 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 13 11:11:55.193629 master-0 kubenswrapper[33013]: I0313 11:11:55.188969 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 13 11:11:55.208622 master-0 kubenswrapper[33013]: I0313 11:11:55.198883 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 13 11:11:55.246972 master-0 kubenswrapper[33013]: I0313 11:11:55.246389 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"11aa0b00-3b31-411e-bc21-1679ffcbc326","Type":"ContainerStarted","Data":"2acf36a960286239f0b9621c154b9cb33a2cfe752125a43c7101a52e120fd228"} Mar 13 11:11:55.248657 master-0 kubenswrapper[33013]: I0313 11:11:55.247761 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gvr6s" event={"ID":"df93307e-94fa-45f4-b6b5-5c84b07b116d","Type":"ContainerStarted","Data":"4b1bb5dc9efcc30b19df3cdcdff4d3c8a8b824ef5e00440c1e0b6579d93dded2"} Mar 13 11:11:55.253688 master-0 kubenswrapper[33013]: I0313 11:11:55.253606 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1267f71c-34a8-4904-bfb6-de85ae27cd8a","Type":"ContainerStarted","Data":"112ec695cb1c05815bc8dcece6682e8e2944089f20e01cafde0a88766da13c26"} Mar 13 11:11:55.261322 master-0 kubenswrapper[33013]: I0313 11:11:55.261271 33013 generic.go:334] "Generic (PLEG): container finished" podID="436732f6-4687-4878-a8fb-0c70a9ea7521" containerID="75cadc57803a237457b65ef703ae88bf2de9f47c1ce6a897ee13143835c43b47" exitCode=0 Mar 13 11:11:55.261696 master-0 kubenswrapper[33013]: I0313 11:11:55.261344 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" 
event={"ID":"436732f6-4687-4878-a8fb-0c70a9ea7521","Type":"ContainerDied","Data":"75cadc57803a237457b65ef703ae88bf2de9f47c1ce6a897ee13143835c43b47"} Mar 13 11:11:55.308125 master-0 kubenswrapper[33013]: I0313 11:11:55.307803 33013 generic.go:334] "Generic (PLEG): container finished" podID="a4305cf0-1184-4ece-a830-a7997ae253d3" containerID="6afee5c3f8027d72002c748b78c55021e63552c4fa15f7edce1fc47276d75dfd" exitCode=0 Mar 13 11:11:55.308125 master-0 kubenswrapper[33013]: I0313 11:11:55.307901 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" event={"ID":"a4305cf0-1184-4ece-a830-a7997ae253d3","Type":"ContainerDied","Data":"6afee5c3f8027d72002c748b78c55021e63552c4fa15f7edce1fc47276d75dfd"} Mar 13 11:11:55.323894 master-0 kubenswrapper[33013]: I0313 11:11:55.323836 33013 generic.go:334] "Generic (PLEG): container finished" podID="2ac69375-9dd5-4a86-82d7-bf38b6309480" containerID="89a77a95a01c8dd5ceb34b189a267699d3e8bc233aab52d27740348a90551393" exitCode=0 Mar 13 11:11:55.324147 master-0 kubenswrapper[33013]: I0313 11:11:55.323922 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" event={"ID":"2ac69375-9dd5-4a86-82d7-bf38b6309480","Type":"ContainerDied","Data":"89a77a95a01c8dd5ceb34b189a267699d3e8bc233aab52d27740348a90551393"} Mar 13 11:11:55.343432 master-0 kubenswrapper[33013]: I0313 11:11:55.343369 33013 generic.go:334] "Generic (PLEG): container finished" podID="6b30d941-db8d-4248-bf0b-535afba17d11" containerID="56eb9ea835de30be92bd119d41700de864b198e7a7bada85c9e96e0e36985ee7" exitCode=0 Mar 13 11:11:55.343808 master-0 kubenswrapper[33013]: I0313 11:11:55.343495 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" event={"ID":"6b30d941-db8d-4248-bf0b-535afba17d11","Type":"ContainerDied","Data":"56eb9ea835de30be92bd119d41700de864b198e7a7bada85c9e96e0e36985ee7"} Mar 13 11:11:55.415466 master-0 kubenswrapper[33013]: I0313 
11:11:55.411795 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"816f8748-d874-491e-8509-d05a7f0334c6","Type":"ContainerStarted","Data":"cc280b4908eee5e7286b29ac11f3991efe7aef3b1518b2a13e354037c2d9e748"} Mar 13 11:11:55.451402 master-0 kubenswrapper[33013]: I0313 11:11:55.451060 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1fa895db-cffa-4a2b-88e0-cd7b59474721","Type":"ContainerStarted","Data":"12a27bf5ccba466d83f385b822f5131637fa5ce1e4274bc008adcd69e210c186"} Mar 13 11:11:55.680203 master-0 kubenswrapper[33013]: I0313 11:11:55.680138 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 13 11:11:55.809740 master-0 kubenswrapper[33013]: I0313 11:11:55.809672 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 13 11:11:56.125013 master-0 kubenswrapper[33013]: I0313 11:11:56.122047 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:56.252084 master-0 kubenswrapper[33013]: I0313 11:11:56.251967 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-config\") pod \"436732f6-4687-4878-a8fb-0c70a9ea7521\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " Mar 13 11:11:56.252736 master-0 kubenswrapper[33013]: I0313 11:11:56.252704 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntpd5\" (UniqueName: \"kubernetes.io/projected/436732f6-4687-4878-a8fb-0c70a9ea7521-kube-api-access-ntpd5\") pod \"436732f6-4687-4878-a8fb-0c70a9ea7521\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " Mar 13 11:11:56.252992 master-0 kubenswrapper[33013]: I0313 11:11:56.252973 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-dns-svc\") pod \"436732f6-4687-4878-a8fb-0c70a9ea7521\" (UID: \"436732f6-4687-4878-a8fb-0c70a9ea7521\") " Mar 13 11:11:56.285577 master-0 kubenswrapper[33013]: I0313 11:11:56.285517 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436732f6-4687-4878-a8fb-0c70a9ea7521-kube-api-access-ntpd5" (OuterVolumeSpecName: "kube-api-access-ntpd5") pod "436732f6-4687-4878-a8fb-0c70a9ea7521" (UID: "436732f6-4687-4878-a8fb-0c70a9ea7521"). InnerVolumeSpecName "kube-api-access-ntpd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:11:56.322688 master-0 kubenswrapper[33013]: I0313 11:11:56.322613 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:56.336001 master-0 kubenswrapper[33013]: I0313 11:11:56.335904 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-config" (OuterVolumeSpecName: "config") pod "436732f6-4687-4878-a8fb-0c70a9ea7521" (UID: "436732f6-4687-4878-a8fb-0c70a9ea7521"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:11:56.353081 master-0 kubenswrapper[33013]: I0313 11:11:56.353012 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "436732f6-4687-4878-a8fb-0c70a9ea7521" (UID: "436732f6-4687-4878-a8fb-0c70a9ea7521"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:11:56.362228 master-0 kubenswrapper[33013]: I0313 11:11:56.357693 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntpd5\" (UniqueName: \"kubernetes.io/projected/436732f6-4687-4878-a8fb-0c70a9ea7521-kube-api-access-ntpd5\") on node \"master-0\" DevicePath \"\"" Mar 13 11:11:56.362228 master-0 kubenswrapper[33013]: I0313 11:11:56.357739 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:11:56.362228 master-0 kubenswrapper[33013]: I0313 11:11:56.357750 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436732f6-4687-4878-a8fb-0c70a9ea7521-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:11:56.369509 master-0 kubenswrapper[33013]: W0313 11:11:56.369211 33013 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b0ea88a_1819_4ff0_b669_8635de5bf6f8.slice/crio-455fc34a6520b783e4dfa3f9769b9cee37aedfdfeb3c7594dbf3b52815ebaf6b WatchSource:0}: Error finding container 455fc34a6520b783e4dfa3f9769b9cee37aedfdfeb3c7594dbf3b52815ebaf6b: Status 404 returned error can't find the container with id 455fc34a6520b783e4dfa3f9769b9cee37aedfdfeb3c7594dbf3b52815ebaf6b Mar 13 11:11:56.373084 master-0 kubenswrapper[33013]: I0313 11:11:56.371944 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 13 11:11:56.459506 master-0 kubenswrapper[33013]: I0313 11:11:56.459436 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4305cf0-1184-4ece-a830-a7997ae253d3-config\") pod \"a4305cf0-1184-4ece-a830-a7997ae253d3\" (UID: \"a4305cf0-1184-4ece-a830-a7997ae253d3\") " Mar 13 11:11:56.459841 master-0 kubenswrapper[33013]: I0313 11:11:56.459692 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjfsh\" (UniqueName: \"kubernetes.io/projected/a4305cf0-1184-4ece-a830-a7997ae253d3-kube-api-access-cjfsh\") pod \"a4305cf0-1184-4ece-a830-a7997ae253d3\" (UID: \"a4305cf0-1184-4ece-a830-a7997ae253d3\") " Mar 13 11:11:56.464294 master-0 kubenswrapper[33013]: I0313 11:11:56.464236 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4305cf0-1184-4ece-a830-a7997ae253d3-kube-api-access-cjfsh" (OuterVolumeSpecName: "kube-api-access-cjfsh") pod "a4305cf0-1184-4ece-a830-a7997ae253d3" (UID: "a4305cf0-1184-4ece-a830-a7997ae253d3"). InnerVolumeSpecName "kube-api-access-cjfsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:11:56.470740 master-0 kubenswrapper[33013]: I0313 11:11:56.470667 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" event={"ID":"a4305cf0-1184-4ece-a830-a7997ae253d3","Type":"ContainerDied","Data":"a2908e31b4946a33dad687f4962fd98ebfc461edfc32c7f0617c0f826ba41dd0"} Mar 13 11:11:56.470740 master-0 kubenswrapper[33013]: I0313 11:11:56.470755 33013 scope.go:117] "RemoveContainer" containerID="6afee5c3f8027d72002c748b78c55021e63552c4fa15f7edce1fc47276d75dfd" Mar 13 11:11:56.470984 master-0 kubenswrapper[33013]: I0313 11:11:56.470851 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-2nj9d" Mar 13 11:11:56.480664 master-0 kubenswrapper[33013]: I0313 11:11:56.480563 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" event={"ID":"2ac69375-9dd5-4a86-82d7-bf38b6309480","Type":"ContainerStarted","Data":"a742da3e75ad98f57d8fca64c8905d359664963407cef76c9cb41dc45da71d3a"} Mar 13 11:11:56.480830 master-0 kubenswrapper[33013]: I0313 11:11:56.480741 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" Mar 13 11:11:56.488173 master-0 kubenswrapper[33013]: I0313 11:11:56.488103 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a3ecc521-569a-4aca-9e52-6e504c9f96de","Type":"ContainerStarted","Data":"e3753a6559f3d8c6b94c55e9ec7f8de5f0c5d0989e3021fffb1158cea1e4c093"} Mar 13 11:11:56.493271 master-0 kubenswrapper[33013]: I0313 11:11:56.493234 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" event={"ID":"6b30d941-db8d-4248-bf0b-535afba17d11","Type":"ContainerStarted","Data":"d79048ee7852dead8586ec2a42ed2a7c8853ce561a5c70975dd4503ee2b377bc"} Mar 13 11:11:56.493855 master-0 
kubenswrapper[33013]: I0313 11:11:56.493805 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" Mar 13 11:11:56.497965 master-0 kubenswrapper[33013]: I0313 11:11:56.497865 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a33afdb8-ba14-4d4a-9031-63100db5abe1","Type":"ContainerStarted","Data":"d571d4b566d838e6616ea6274888634e9604a10e249d646738eea83f0cc1670b"} Mar 13 11:11:56.505113 master-0 kubenswrapper[33013]: I0313 11:11:56.504094 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" event={"ID":"436732f6-4687-4878-a8fb-0c70a9ea7521","Type":"ContainerDied","Data":"2704bc92f6a913a3b9ce2109e6e68406cbb54ffcebfc166a7c8be4c03522a695"} Mar 13 11:11:56.505113 master-0 kubenswrapper[33013]: I0313 11:11:56.504117 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-f6hgp" Mar 13 11:11:56.506697 master-0 kubenswrapper[33013]: I0313 11:11:56.506634 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6b0ea88a-1819-4ff0-b669-8635de5bf6f8","Type":"ContainerStarted","Data":"455fc34a6520b783e4dfa3f9769b9cee37aedfdfeb3c7594dbf3b52815ebaf6b"} Mar 13 11:11:56.515936 master-0 kubenswrapper[33013]: I0313 11:11:56.515879 33013 scope.go:117] "RemoveContainer" containerID="75cadc57803a237457b65ef703ae88bf2de9f47c1ce6a897ee13143835c43b47" Mar 13 11:11:56.518721 master-0 kubenswrapper[33013]: I0313 11:11:56.518617 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4305cf0-1184-4ece-a830-a7997ae253d3-config" (OuterVolumeSpecName: "config") pod "a4305cf0-1184-4ece-a830-a7997ae253d3" (UID: "a4305cf0-1184-4ece-a830-a7997ae253d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:11:56.534311 master-0 kubenswrapper[33013]: I0313 11:11:56.534179 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" podStartSLOduration=3.9142550419999997 podStartE2EDuration="24.534093572s" podCreationTimestamp="2026-03-13 11:11:32 +0000 UTC" firstStartedPulling="2026-03-13 11:11:33.814114199 +0000 UTC m=+877.290067548" lastFinishedPulling="2026-03-13 11:11:54.433952729 +0000 UTC m=+897.909906078" observedRunningTime="2026-03-13 11:11:56.49893208 +0000 UTC m=+899.974885429" watchObservedRunningTime="2026-03-13 11:11:56.534093572 +0000 UTC m=+900.010046921" Mar 13 11:11:56.543455 master-0 kubenswrapper[33013]: I0313 11:11:56.543360 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" podStartSLOduration=4.450868851 podStartE2EDuration="24.543337093s" podCreationTimestamp="2026-03-13 11:11:32 +0000 UTC" firstStartedPulling="2026-03-13 11:11:34.222561982 +0000 UTC m=+877.698515331" lastFinishedPulling="2026-03-13 11:11:54.315030224 +0000 UTC m=+897.790983573" observedRunningTime="2026-03-13 11:11:56.527480116 +0000 UTC m=+900.003433465" watchObservedRunningTime="2026-03-13 11:11:56.543337093 +0000 UTC m=+900.019290442" Mar 13 11:11:56.562906 master-0 kubenswrapper[33013]: I0313 11:11:56.562773 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjfsh\" (UniqueName: \"kubernetes.io/projected/a4305cf0-1184-4ece-a830-a7997ae253d3-kube-api-access-cjfsh\") on node \"master-0\" DevicePath \"\"" Mar 13 11:11:56.562906 master-0 kubenswrapper[33013]: I0313 11:11:56.562887 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4305cf0-1184-4ece-a830-a7997ae253d3-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:11:56.630748 master-0 kubenswrapper[33013]: I0313 11:11:56.630693 33013 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-f6hgp"] Mar 13 11:11:56.644601 master-0 kubenswrapper[33013]: I0313 11:11:56.644523 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-f6hgp"] Mar 13 11:11:56.736972 master-0 kubenswrapper[33013]: I0313 11:11:56.736918 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436732f6-4687-4878-a8fb-0c70a9ea7521" path="/var/lib/kubelet/pods/436732f6-4687-4878-a8fb-0c70a9ea7521/volumes" Mar 13 11:11:57.176457 master-0 kubenswrapper[33013]: I0313 11:11:57.176392 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-2nj9d"] Mar 13 11:11:57.205068 master-0 kubenswrapper[33013]: I0313 11:11:57.204947 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-2nj9d"] Mar 13 11:11:57.766519 master-0 kubenswrapper[33013]: I0313 11:11:57.766458 33013 trace.go:236] Trace[390618194]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (13-Mar-2026 11:11:56.706) (total time: 1059ms): Mar 13 11:11:57.766519 master-0 kubenswrapper[33013]: Trace[390618194]: [1.059442456s] [1.059442456s] END Mar 13 11:11:58.729762 master-0 kubenswrapper[33013]: I0313 11:11:58.729673 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4305cf0-1184-4ece-a830-a7997ae253d3" path="/var/lib/kubelet/pods/a4305cf0-1184-4ece-a830-a7997ae253d3/volumes" Mar 13 11:12:02.799025 master-0 kubenswrapper[33013]: I0313 11:12:02.798735 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" Mar 13 11:12:03.308322 master-0 kubenswrapper[33013]: I0313 11:12:03.308222 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" Mar 13 11:12:03.788208 master-0 kubenswrapper[33013]: I0313 11:12:03.786098 33013 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-z6tq4"] Mar 13 11:12:03.788208 master-0 kubenswrapper[33013]: I0313 11:12:03.786979 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" podUID="2ac69375-9dd5-4a86-82d7-bf38b6309480" containerName="dnsmasq-dns" containerID="cri-o://a742da3e75ad98f57d8fca64c8905d359664963407cef76c9cb41dc45da71d3a" gracePeriod=10 Mar 13 11:12:03.930772 master-0 kubenswrapper[33013]: E0313 11:12:03.930635 33013 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ac69375_9dd5_4a86_82d7_bf38b6309480.slice/crio-a742da3e75ad98f57d8fca64c8905d359664963407cef76c9cb41dc45da71d3a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ac69375_9dd5_4a86_82d7_bf38b6309480.slice/crio-conmon-a742da3e75ad98f57d8fca64c8905d359664963407cef76c9cb41dc45da71d3a.scope\": RecentStats: unable to find data in memory cache]" Mar 13 11:12:04.678020 master-0 kubenswrapper[33013]: I0313 11:12:04.670302 33013 generic.go:334] "Generic (PLEG): container finished" podID="2ac69375-9dd5-4a86-82d7-bf38b6309480" containerID="a742da3e75ad98f57d8fca64c8905d359664963407cef76c9cb41dc45da71d3a" exitCode=0 Mar 13 11:12:04.678020 master-0 kubenswrapper[33013]: I0313 11:12:04.670379 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" event={"ID":"2ac69375-9dd5-4a86-82d7-bf38b6309480","Type":"ContainerDied","Data":"a742da3e75ad98f57d8fca64c8905d359664963407cef76c9cb41dc45da71d3a"} Mar 13 11:12:04.678020 master-0 kubenswrapper[33013]: I0313 11:12:04.675136 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 13 11:12:04.701719 master-0 kubenswrapper[33013]: I0313 11:12:04.701641 33013 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.987250967 podStartE2EDuration="28.701617832s" podCreationTimestamp="2026-03-13 11:11:36 +0000 UTC" firstStartedPulling="2026-03-13 11:11:54.579502185 +0000 UTC m=+898.055455534" lastFinishedPulling="2026-03-13 11:12:04.29386905 +0000 UTC m=+907.769822399" observedRunningTime="2026-03-13 11:12:04.700722727 +0000 UTC m=+908.176676086" watchObservedRunningTime="2026-03-13 11:12:04.701617832 +0000 UTC m=+908.177571171" Mar 13 11:12:04.956096 master-0 kubenswrapper[33013]: I0313 11:12:04.956054 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" Mar 13 11:12:05.149907 master-0 kubenswrapper[33013]: I0313 11:12:05.149758 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-config\") pod \"2ac69375-9dd5-4a86-82d7-bf38b6309480\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " Mar 13 11:12:05.149907 master-0 kubenswrapper[33013]: I0313 11:12:05.149852 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5gnl\" (UniqueName: \"kubernetes.io/projected/2ac69375-9dd5-4a86-82d7-bf38b6309480-kube-api-access-b5gnl\") pod \"2ac69375-9dd5-4a86-82d7-bf38b6309480\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " Mar 13 11:12:05.150162 master-0 kubenswrapper[33013]: I0313 11:12:05.149938 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-dns-svc\") pod \"2ac69375-9dd5-4a86-82d7-bf38b6309480\" (UID: \"2ac69375-9dd5-4a86-82d7-bf38b6309480\") " Mar 13 11:12:05.245329 master-0 kubenswrapper[33013]: I0313 11:12:05.245261 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2ac69375-9dd5-4a86-82d7-bf38b6309480-kube-api-access-b5gnl" (OuterVolumeSpecName: "kube-api-access-b5gnl") pod "2ac69375-9dd5-4a86-82d7-bf38b6309480" (UID: "2ac69375-9dd5-4a86-82d7-bf38b6309480"). InnerVolumeSpecName "kube-api-access-b5gnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:05.252116 master-0 kubenswrapper[33013]: I0313 11:12:05.252047 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5gnl\" (UniqueName: \"kubernetes.io/projected/2ac69375-9dd5-4a86-82d7-bf38b6309480-kube-api-access-b5gnl\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:05.687620 master-0 kubenswrapper[33013]: I0313 11:12:05.687518 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" event={"ID":"2ac69375-9dd5-4a86-82d7-bf38b6309480","Type":"ContainerDied","Data":"3b7cce393b2c51be3cc6ede6910dc785b4e3163f92d5d3332d12353119fbf432"} Mar 13 11:12:05.687620 master-0 kubenswrapper[33013]: I0313 11:12:05.687551 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-z6tq4" Mar 13 11:12:05.687620 master-0 kubenswrapper[33013]: I0313 11:12:05.687617 33013 scope.go:117] "RemoveContainer" containerID="a742da3e75ad98f57d8fca64c8905d359664963407cef76c9cb41dc45da71d3a" Mar 13 11:12:05.689713 master-0 kubenswrapper[33013]: I0313 11:12:05.689638 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"11aa0b00-3b31-411e-bc21-1679ffcbc326","Type":"ContainerStarted","Data":"7f7c076c86bf982b67770c333b64c9f2b14fe70670a4ef708b079429f41f1e21"} Mar 13 11:12:05.692035 master-0 kubenswrapper[33013]: I0313 11:12:05.691995 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gvr6s" event={"ID":"df93307e-94fa-45f4-b6b5-5c84b07b116d","Type":"ContainerStarted","Data":"7ad1045d54b782d2e8f5cf999c072ae317f61f5211029003159a16249e76359f"} Mar 13 11:12:05.692168 master-0 kubenswrapper[33013]: I0313 11:12:05.692143 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gvr6s" Mar 13 11:12:05.693580 master-0 kubenswrapper[33013]: I0313 11:12:05.693515 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"816f8748-d874-491e-8509-d05a7f0334c6","Type":"ContainerStarted","Data":"911fbc6a863a66534e82487915e54997bf78c92b779591c631bbc3ea3b877433"} Mar 13 11:12:05.695479 master-0 kubenswrapper[33013]: I0313 11:12:05.695436 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6b0ea88a-1819-4ff0-b669-8635de5bf6f8","Type":"ContainerStarted","Data":"94bfde82bd3816cc631e37992815b2e55e9c9e64c9482bcfcc9b576c6776dd82"} Mar 13 11:12:05.696987 master-0 kubenswrapper[33013]: I0313 11:12:05.696963 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b8kgc" 
event={"ID":"79ae2dd9-ee06-441a-bced-18a3bec394cf","Type":"ContainerStarted","Data":"5961cbbf8a774c0343ebfe967d7b1ef3278ceb9190e90bfe4fe754e725bbebf7"} Mar 13 11:12:05.716769 master-0 kubenswrapper[33013]: I0313 11:12:05.716676 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gvr6s" podStartSLOduration=14.570615823 podStartE2EDuration="23.716651786s" podCreationTimestamp="2026-03-13 11:11:42 +0000 UTC" firstStartedPulling="2026-03-13 11:11:55.178130991 +0000 UTC m=+898.654084340" lastFinishedPulling="2026-03-13 11:12:04.324166954 +0000 UTC m=+907.800120303" observedRunningTime="2026-03-13 11:12:05.709968547 +0000 UTC m=+909.185921896" watchObservedRunningTime="2026-03-13 11:12:05.716651786 +0000 UTC m=+909.192605145" Mar 13 11:12:05.717034 master-0 kubenswrapper[33013]: I0313 11:12:05.716999 33013 scope.go:117] "RemoveContainer" containerID="89a77a95a01c8dd5ceb34b189a267699d3e8bc233aab52d27740348a90551393" Mar 13 11:12:06.109815 master-0 kubenswrapper[33013]: I0313 11:12:06.109728 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-config" (OuterVolumeSpecName: "config") pod "2ac69375-9dd5-4a86-82d7-bf38b6309480" (UID: "2ac69375-9dd5-4a86-82d7-bf38b6309480"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:06.114053 master-0 kubenswrapper[33013]: I0313 11:12:06.114003 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ac69375-9dd5-4a86-82d7-bf38b6309480" (UID: "2ac69375-9dd5-4a86-82d7-bf38b6309480"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:06.172202 master-0 kubenswrapper[33013]: I0313 11:12:06.172124 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:06.172202 master-0 kubenswrapper[33013]: I0313 11:12:06.172170 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ac69375-9dd5-4a86-82d7-bf38b6309480-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:06.329727 master-0 kubenswrapper[33013]: I0313 11:12:06.329656 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-z6tq4"] Mar 13 11:12:06.374473 master-0 kubenswrapper[33013]: I0313 11:12:06.374387 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-z6tq4"] Mar 13 11:12:06.720918 master-0 kubenswrapper[33013]: I0313 11:12:06.720763 33013 generic.go:334] "Generic (PLEG): container finished" podID="79ae2dd9-ee06-441a-bced-18a3bec394cf" containerID="5961cbbf8a774c0343ebfe967d7b1ef3278ceb9190e90bfe4fe754e725bbebf7" exitCode=0 Mar 13 11:12:06.735011 master-0 kubenswrapper[33013]: I0313 11:12:06.734944 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac69375-9dd5-4a86-82d7-bf38b6309480" path="/var/lib/kubelet/pods/2ac69375-9dd5-4a86-82d7-bf38b6309480/volumes" Mar 13 11:12:06.736120 master-0 kubenswrapper[33013]: I0313 11:12:06.736065 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b8kgc" event={"ID":"79ae2dd9-ee06-441a-bced-18a3bec394cf","Type":"ContainerDied","Data":"5961cbbf8a774c0343ebfe967d7b1ef3278ceb9190e90bfe4fe754e725bbebf7"} Mar 13 11:12:06.736230 master-0 kubenswrapper[33013]: I0313 11:12:06.736152 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"a3ecc521-569a-4aca-9e52-6e504c9f96de","Type":"ContainerStarted","Data":"0365f4a769e3a7eb0fc7be63cfcf5cf439fb1d1ac981c254544f7f8b042eee36"} Mar 13 11:12:06.747323 master-0 kubenswrapper[33013]: I0313 11:12:06.747239 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1267f71c-34a8-4904-bfb6-de85ae27cd8a","Type":"ContainerStarted","Data":"c0952ccd46c81a64fe136b6673f6c0fa6dd5c677de57886b0f7f78a748910537"} Mar 13 11:12:06.751568 master-0 kubenswrapper[33013]: I0313 11:12:06.751494 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a33afdb8-ba14-4d4a-9031-63100db5abe1","Type":"ContainerStarted","Data":"52ea9688c27cffda64c46cba4368b431cb03cc7c78110ce725cc88c7f36bd065"} Mar 13 11:12:06.778132 master-0 kubenswrapper[33013]: I0313 11:12:06.778057 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1fa895db-cffa-4a2b-88e0-cd7b59474721","Type":"ContainerStarted","Data":"11cc2ce206bf8bfdc7f2cb3f6f3ffc2bfe66ebf48ac7361df938d24b527283d8"} Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: I0313 11:12:07.272641 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lmj9d"] Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: E0313 11:12:07.273123 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436732f6-4687-4878-a8fb-0c70a9ea7521" containerName="init" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: I0313 11:12:07.273137 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="436732f6-4687-4878-a8fb-0c70a9ea7521" containerName="init" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: E0313 11:12:07.273165 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac69375-9dd5-4a86-82d7-bf38b6309480" containerName="init" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: I0313 11:12:07.273171 33013 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2ac69375-9dd5-4a86-82d7-bf38b6309480" containerName="init" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: E0313 11:12:07.273213 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac69375-9dd5-4a86-82d7-bf38b6309480" containerName="dnsmasq-dns" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: I0313 11:12:07.273220 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac69375-9dd5-4a86-82d7-bf38b6309480" containerName="dnsmasq-dns" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: E0313 11:12:07.273233 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4305cf0-1184-4ece-a830-a7997ae253d3" containerName="init" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: I0313 11:12:07.273239 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4305cf0-1184-4ece-a830-a7997ae253d3" containerName="init" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: I0313 11:12:07.273456 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="436732f6-4687-4878-a8fb-0c70a9ea7521" containerName="init" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: I0313 11:12:07.273472 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac69375-9dd5-4a86-82d7-bf38b6309480" containerName="dnsmasq-dns" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: I0313 11:12:07.273484 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4305cf0-1184-4ece-a830-a7997ae253d3" containerName="init" Mar 13 11:12:07.274703 master-0 kubenswrapper[33013]: I0313 11:12:07.274170 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.278152 master-0 kubenswrapper[33013]: I0313 11:12:07.278085 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 13 11:12:07.283022 master-0 kubenswrapper[33013]: I0313 11:12:07.282028 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lmj9d"] Mar 13 11:12:07.336961 master-0 kubenswrapper[33013]: I0313 11:12:07.336813 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/178e6fcb-b721-41b9-aef2-fceec7e95e89-ovs-rundir\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.336961 master-0 kubenswrapper[33013]: I0313 11:12:07.336914 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178e6fcb-b721-41b9-aef2-fceec7e95e89-combined-ca-bundle\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.337353 master-0 kubenswrapper[33013]: I0313 11:12:07.337005 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/178e6fcb-b721-41b9-aef2-fceec7e95e89-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.337353 master-0 kubenswrapper[33013]: I0313 11:12:07.337040 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178e6fcb-b721-41b9-aef2-fceec7e95e89-config\") pod 
\"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.337353 master-0 kubenswrapper[33013]: I0313 11:12:07.337062 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/178e6fcb-b721-41b9-aef2-fceec7e95e89-ovn-rundir\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.337353 master-0 kubenswrapper[33013]: I0313 11:12:07.337126 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpj5b\" (UniqueName: \"kubernetes.io/projected/178e6fcb-b721-41b9-aef2-fceec7e95e89-kube-api-access-tpj5b\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.441649 master-0 kubenswrapper[33013]: I0313 11:12:07.441465 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpj5b\" (UniqueName: \"kubernetes.io/projected/178e6fcb-b721-41b9-aef2-fceec7e95e89-kube-api-access-tpj5b\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.441649 master-0 kubenswrapper[33013]: I0313 11:12:07.441547 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/178e6fcb-b721-41b9-aef2-fceec7e95e89-ovs-rundir\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.441649 master-0 kubenswrapper[33013]: I0313 11:12:07.441637 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/178e6fcb-b721-41b9-aef2-fceec7e95e89-combined-ca-bundle\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.441970 master-0 kubenswrapper[33013]: I0313 11:12:07.441769 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/178e6fcb-b721-41b9-aef2-fceec7e95e89-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.441970 master-0 kubenswrapper[33013]: I0313 11:12:07.441805 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178e6fcb-b721-41b9-aef2-fceec7e95e89-config\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.441970 master-0 kubenswrapper[33013]: I0313 11:12:07.441837 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/178e6fcb-b721-41b9-aef2-fceec7e95e89-ovn-rundir\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.442062 master-0 kubenswrapper[33013]: I0313 11:12:07.442027 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/178e6fcb-b721-41b9-aef2-fceec7e95e89-ovn-rundir\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.444836 master-0 kubenswrapper[33013]: I0313 11:12:07.442458 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/178e6fcb-b721-41b9-aef2-fceec7e95e89-ovs-rundir\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.452001 master-0 kubenswrapper[33013]: I0313 11:12:07.451637 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178e6fcb-b721-41b9-aef2-fceec7e95e89-combined-ca-bundle\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.456763 master-0 kubenswrapper[33013]: I0313 11:12:07.453356 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178e6fcb-b721-41b9-aef2-fceec7e95e89-config\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.469680 master-0 kubenswrapper[33013]: I0313 11:12:07.468449 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/178e6fcb-b721-41b9-aef2-fceec7e95e89-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.509860 master-0 kubenswrapper[33013]: I0313 11:12:07.509681 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-bccqm"] Mar 13 11:12:07.511926 master-0 kubenswrapper[33013]: I0313 11:12:07.511850 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.515443 master-0 kubenswrapper[33013]: I0313 11:12:07.513488 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpj5b\" (UniqueName: \"kubernetes.io/projected/178e6fcb-b721-41b9-aef2-fceec7e95e89-kube-api-access-tpj5b\") pod \"ovn-controller-metrics-lmj9d\" (UID: \"178e6fcb-b721-41b9-aef2-fceec7e95e89\") " pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.515817 master-0 kubenswrapper[33013]: I0313 11:12:07.515732 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 13 11:12:07.559822 master-0 kubenswrapper[33013]: I0313 11:12:07.559735 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-bccqm"] Mar 13 11:12:07.621941 master-0 kubenswrapper[33013]: I0313 11:12:07.621871 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lmj9d" Mar 13 11:12:07.646091 master-0 kubenswrapper[33013]: I0313 11:12:07.646012 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527pn\" (UniqueName: \"kubernetes.io/projected/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-kube-api-access-527pn\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.646339 master-0 kubenswrapper[33013]: I0313 11:12:07.646145 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-config\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.646339 master-0 kubenswrapper[33013]: I0313 11:12:07.646195 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-dns-svc\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.646339 master-0 kubenswrapper[33013]: I0313 11:12:07.646258 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.748187 master-0 kubenswrapper[33013]: I0313 11:12:07.748119 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.748476 master-0 kubenswrapper[33013]: I0313 11:12:07.748241 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-527pn\" (UniqueName: \"kubernetes.io/projected/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-kube-api-access-527pn\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.748476 master-0 kubenswrapper[33013]: I0313 11:12:07.748362 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-config\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.748560 master-0 kubenswrapper[33013]: I0313 
11:12:07.748495 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-dns-svc\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.752468 master-0 kubenswrapper[33013]: I0313 11:12:07.752425 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-config\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.754916 master-0 kubenswrapper[33013]: I0313 11:12:07.754660 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.754916 master-0 kubenswrapper[33013]: I0313 11:12:07.754722 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-dns-svc\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.810944 master-0 kubenswrapper[33013]: I0313 11:12:07.810773 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-527pn\" (UniqueName: \"kubernetes.io/projected/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-kube-api-access-527pn\") pod \"dnsmasq-dns-5db7b98cb5-bccqm\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") " pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.817624 master-0 kubenswrapper[33013]: I0313 11:12:07.815770 33013 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-bccqm"] Mar 13 11:12:07.817624 master-0 kubenswrapper[33013]: I0313 11:12:07.816893 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" Mar 13 11:12:07.821514 master-0 kubenswrapper[33013]: I0313 11:12:07.820334 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b8kgc" event={"ID":"79ae2dd9-ee06-441a-bced-18a3bec394cf","Type":"ContainerStarted","Data":"3fa4070f188b2123f3f76f075e4227a56ce2654f8032c2b7596c1488080e9f66"} Mar 13 11:12:07.865878 master-0 kubenswrapper[33013]: I0313 11:12:07.863706 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-2cpmb"] Mar 13 11:12:07.866120 master-0 kubenswrapper[33013]: I0313 11:12:07.865924 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:07.872256 master-0 kubenswrapper[33013]: I0313 11:12:07.869938 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 13 11:12:08.090031 master-0 kubenswrapper[33013]: I0313 11:12:08.088965 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-2cpmb"] Mar 13 11:12:08.093245 master-0 kubenswrapper[33013]: I0313 11:12:08.092317 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-dns-svc\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.093245 master-0 kubenswrapper[33013]: I0313 11:12:08.092420 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-sb\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.093245 master-0 kubenswrapper[33013]: I0313 11:12:08.092539 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-nb\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.093245 master-0 kubenswrapper[33013]: I0313 11:12:08.092575 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-config\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.093245 master-0 kubenswrapper[33013]: I0313 11:12:08.092686 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4g6p\" (UniqueName: \"kubernetes.io/projected/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-kube-api-access-c4g6p\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.202122 master-0 kubenswrapper[33013]: I0313 11:12:08.195661 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-dns-svc\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.202122 master-0 kubenswrapper[33013]: I0313 11:12:08.195741 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-sb\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.202122 master-0 kubenswrapper[33013]: I0313 11:12:08.195807 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-nb\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.202122 master-0 kubenswrapper[33013]: I0313 11:12:08.195838 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-config\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.202122 master-0 kubenswrapper[33013]: I0313 11:12:08.195900 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4g6p\" (UniqueName: \"kubernetes.io/projected/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-kube-api-access-c4g6p\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.202122 master-0 kubenswrapper[33013]: I0313 11:12:08.197551 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-sb\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.202122 master-0 kubenswrapper[33013]: I0313 11:12:08.200160 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-nb\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.229969 master-0 kubenswrapper[33013]: I0313 11:12:08.229484 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-dns-svc\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.234777 master-0 kubenswrapper[33013]: I0313 11:12:08.233105 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-config\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.241045 master-0 kubenswrapper[33013]: I0313 11:12:08.240016 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4g6p\" (UniqueName: \"kubernetes.io/projected/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-kube-api-access-c4g6p\") pod \"dnsmasq-dns-57bc987d9f-2cpmb\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.340801 master-0 kubenswrapper[33013]: I0313 11:12:08.340284 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:08.622905 master-0 kubenswrapper[33013]: I0313 11:12:08.622436 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lmj9d"] Mar 13 11:12:08.685356 master-0 kubenswrapper[33013]: I0313 11:12:08.684690 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-bccqm"] Mar 13 11:12:08.842545 master-0 kubenswrapper[33013]: I0313 11:12:08.842482 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lmj9d" event={"ID":"178e6fcb-b721-41b9-aef2-fceec7e95e89","Type":"ContainerStarted","Data":"29cd2d1a9705d111c662aab7388dada3b0194a10d40f32b0078d4fccfbe6c6f6"} Mar 13 11:12:08.844164 master-0 kubenswrapper[33013]: I0313 11:12:08.844124 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" event={"ID":"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4","Type":"ContainerStarted","Data":"e18e800534036dc71fd83cdec68869a5384cc93c090a9384929e641109073d66"} Mar 13 11:12:08.847754 master-0 kubenswrapper[33013]: I0313 11:12:08.847696 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b8kgc" event={"ID":"79ae2dd9-ee06-441a-bced-18a3bec394cf","Type":"ContainerStarted","Data":"1392f746efb1d518d9061e3bbdfcc9ab4f29c0e3eba061f200b3e67e88b86ce1"} Mar 13 11:12:08.849498 master-0 kubenswrapper[33013]: I0313 11:12:08.849466 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:12:08.849498 master-0 kubenswrapper[33013]: I0313 11:12:08.849499 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b8kgc" Mar 13 11:12:08.945195 master-0 kubenswrapper[33013]: I0313 11:12:08.945096 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-b8kgc" 
podStartSLOduration=16.643777605 podStartE2EDuration="26.945070667s" podCreationTimestamp="2026-03-13 11:11:42 +0000 UTC" firstStartedPulling="2026-03-13 11:11:53.992761463 +0000 UTC m=+897.468714812" lastFinishedPulling="2026-03-13 11:12:04.294054525 +0000 UTC m=+907.770007874" observedRunningTime="2026-03-13 11:12:08.932953765 +0000 UTC m=+912.408907114" watchObservedRunningTime="2026-03-13 11:12:08.945070667 +0000 UTC m=+912.421024016"
Mar 13 11:12:08.974675 master-0 kubenswrapper[33013]: I0313 11:12:08.974567 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-2cpmb"]
Mar 13 11:12:09.901912 master-0 kubenswrapper[33013]: I0313 11:12:09.901821 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" event={"ID":"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4","Type":"ContainerStarted","Data":"2287bf9d260e5b1f47e8a5fb2830000b1f1fb31a6b6fd5306b7b08e21bc4c540"}
Mar 13 11:12:09.902522 master-0 kubenswrapper[33013]: I0313 11:12:09.902022 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" podUID="a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" containerName="init" containerID="cri-o://2287bf9d260e5b1f47e8a5fb2830000b1f1fb31a6b6fd5306b7b08e21bc4c540" gracePeriod=10
Mar 13 11:12:12.005366 master-0 kubenswrapper[33013]: I0313 11:12:12.004165 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Mar 13 11:12:12.150153 master-0 kubenswrapper[33013]: W0313 11:12:12.150077 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30faeaae_b9cd_44d7_bb72_66a33ea7f14d.slice/crio-608a6a93d95aeda59d6cc5f878bd5cba6299ab507fae576bfb748af4d060ee59 WatchSource:0}: Error finding container 608a6a93d95aeda59d6cc5f878bd5cba6299ab507fae576bfb748af4d060ee59: Status 404 returned error can't find the container with id
608a6a93d95aeda59d6cc5f878bd5cba6299ab507fae576bfb748af4d060ee59
Mar 13 11:12:12.932318 master-0 kubenswrapper[33013]: I0313 11:12:12.932255 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" event={"ID":"30faeaae-b9cd-44d7-bb72-66a33ea7f14d","Type":"ContainerStarted","Data":"608a6a93d95aeda59d6cc5f878bd5cba6299ab507fae576bfb748af4d060ee59"}
Mar 13 11:12:12.934130 master-0 kubenswrapper[33013]: I0313 11:12:12.934092 33013 generic.go:334] "Generic (PLEG): container finished" podID="816f8748-d874-491e-8509-d05a7f0334c6" containerID="911fbc6a863a66534e82487915e54997bf78c92b779591c631bbc3ea3b877433" exitCode=0
Mar 13 11:12:12.934213 master-0 kubenswrapper[33013]: I0313 11:12:12.934141 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"816f8748-d874-491e-8509-d05a7f0334c6","Type":"ContainerDied","Data":"911fbc6a863a66534e82487915e54997bf78c92b779591c631bbc3ea3b877433"}
Mar 13 11:12:12.947829 master-0 kubenswrapper[33013]: I0313 11:12:12.947734 33013 generic.go:334] "Generic (PLEG): container finished" podID="1fa895db-cffa-4a2b-88e0-cd7b59474721" containerID="11cc2ce206bf8bfdc7f2cb3f6f3ffc2bfe66ebf48ac7361df938d24b527283d8" exitCode=0
Mar 13 11:12:12.947829 master-0 kubenswrapper[33013]: I0313 11:12:12.947822 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1fa895db-cffa-4a2b-88e0-cd7b59474721","Type":"ContainerDied","Data":"11cc2ce206bf8bfdc7f2cb3f6f3ffc2bfe66ebf48ac7361df938d24b527283d8"}
Mar 13 11:12:12.950898 master-0 kubenswrapper[33013]: I0313 11:12:12.950867 33013 generic.go:334] "Generic (PLEG): container finished" podID="a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" containerID="2287bf9d260e5b1f47e8a5fb2830000b1f1fb31a6b6fd5306b7b08e21bc4c540" exitCode=0
Mar 13 11:12:12.951001 master-0 kubenswrapper[33013]: I0313 11:12:12.950901 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" event={"ID":"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4","Type":"ContainerDied","Data":"2287bf9d260e5b1f47e8a5fb2830000b1f1fb31a6b6fd5306b7b08e21bc4c540"}
Mar 13 11:12:13.183898 master-0 kubenswrapper[33013]: I0313 11:12:13.183784 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm"
Mar 13 11:12:13.452603 master-0 kubenswrapper[33013]: I0313 11:12:13.452443 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-config\") pod \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") "
Mar 13 11:12:13.452603 master-0 kubenswrapper[33013]: I0313 11:12:13.452548 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-ovsdbserver-nb\") pod \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") "
Mar 13 11:12:13.452603 master-0 kubenswrapper[33013]: I0313 11:12:13.452603 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-dns-svc\") pod \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") "
Mar 13 11:12:13.452873 master-0 kubenswrapper[33013]: I0313 11:12:13.452665 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-527pn\" (UniqueName: \"kubernetes.io/projected/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-kube-api-access-527pn\") pod \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\" (UID: \"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4\") "
Mar 13 11:12:13.459375 master-0 kubenswrapper[33013]: I0313 11:12:13.459295 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded
for volume "kubernetes.io/projected/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-kube-api-access-527pn" (OuterVolumeSpecName: "kube-api-access-527pn") pod "a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" (UID: "a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4"). InnerVolumeSpecName "kube-api-access-527pn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:12:13.472887 master-0 kubenswrapper[33013]: I0313 11:12:13.472812 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" (UID: "a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:12:13.473030 master-0 kubenswrapper[33013]: I0313 11:12:13.472877 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-config" (OuterVolumeSpecName: "config") pod "a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" (UID: "a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:12:13.474966 master-0 kubenswrapper[33013]: I0313 11:12:13.474914 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" (UID: "a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:12:13.555369 master-0 kubenswrapper[33013]: I0313 11:12:13.555293 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-config\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:13.555369 master-0 kubenswrapper[33013]: I0313 11:12:13.555340 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:13.555369 master-0 kubenswrapper[33013]: I0313 11:12:13.555356 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:13.555369 master-0 kubenswrapper[33013]: I0313 11:12:13.555366 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-527pn\" (UniqueName: \"kubernetes.io/projected/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4-kube-api-access-527pn\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:13.968131 master-0 kubenswrapper[33013]: I0313 11:12:13.968030 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1fa895db-cffa-4a2b-88e0-cd7b59474721","Type":"ContainerStarted","Data":"8819d43bbb1291fc00c7c87ca109a1226c8e6bc7f674a748e0bfa73c58135e86"}
Mar 13 11:12:13.978087 master-0 kubenswrapper[33013]: I0313 11:12:13.978020 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm" event={"ID":"a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4","Type":"ContainerDied","Data":"e18e800534036dc71fd83cdec68869a5384cc93c090a9384929e641109073d66"}
Mar 13 11:12:13.978087 master-0 kubenswrapper[33013]: I0313 11:12:13.978091 33013 scope.go:117] "RemoveContainer"
containerID="2287bf9d260e5b1f47e8a5fb2830000b1f1fb31a6b6fd5306b7b08e21bc4c540"
Mar 13 11:12:13.978444 master-0 kubenswrapper[33013]: I0313 11:12:13.978311 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-bccqm"
Mar 13 11:12:13.988622 master-0 kubenswrapper[33013]: I0313 11:12:13.988291 33013 generic.go:334] "Generic (PLEG): container finished" podID="30faeaae-b9cd-44d7-bb72-66a33ea7f14d" containerID="50d8ac66c871323deb7a2c33996406e3bc7d7b1d977b070b0b6f7b176423a85b" exitCode=0
Mar 13 11:12:13.988622 master-0 kubenswrapper[33013]: I0313 11:12:13.988474 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" event={"ID":"30faeaae-b9cd-44d7-bb72-66a33ea7f14d","Type":"ContainerDied","Data":"50d8ac66c871323deb7a2c33996406e3bc7d7b1d977b070b0b6f7b176423a85b"}
Mar 13 11:12:14.001977 master-0 kubenswrapper[33013]: I0313 11:12:14.001925 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"816f8748-d874-491e-8509-d05a7f0334c6","Type":"ContainerStarted","Data":"e8e2edad9705a2793c669c0eec46be73251f6f3dd5c982038fd08ab0ac54d526"}
Mar 13 11:12:14.044788 master-0 kubenswrapper[33013]: I0313 11:12:14.043160 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.734962531 podStartE2EDuration="40.043136259s" podCreationTimestamp="2026-03-13 11:11:34 +0000 UTC" firstStartedPulling="2026-03-13 11:11:55.204375071 +0000 UTC m=+898.680328420" lastFinishedPulling="2026-03-13 11:12:04.512548799 +0000 UTC m=+907.988502148" observedRunningTime="2026-03-13 11:12:13.995213997 +0000 UTC m=+917.471167356" watchObservedRunningTime="2026-03-13 11:12:14.043136259 +0000 UTC m=+917.519089608"
Mar 13 11:12:14.082792 master-0 kubenswrapper[33013]: I0313 11:12:14.081299 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.692980924 podStartE2EDuration="35.081278535s" podCreationTimestamp="2026-03-13 11:11:39 +0000 UTC" firstStartedPulling="2026-03-13 11:11:56.381980811 +0000 UTC m=+899.857934160" lastFinishedPulling="2026-03-13 11:12:13.770278412 +0000 UTC m=+917.246231771" observedRunningTime="2026-03-13 11:12:14.055601501 +0000 UTC m=+917.531554870" watchObservedRunningTime="2026-03-13 11:12:14.081278535 +0000 UTC m=+917.557231884"
Mar 13 11:12:14.099803 master-0 kubenswrapper[33013]: I0313 11:12:14.099749 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=31.90256534 podStartE2EDuration="41.099465868s" podCreationTimestamp="2026-03-13 11:11:33 +0000 UTC" firstStartedPulling="2026-03-13 11:11:55.150228574 +0000 UTC m=+898.626181923" lastFinishedPulling="2026-03-13 11:12:04.347129102 +0000 UTC m=+907.823082451" observedRunningTime="2026-03-13 11:12:14.096039652 +0000 UTC m=+917.571993021" watchObservedRunningTime="2026-03-13 11:12:14.099465868 +0000 UTC m=+917.575419227"
Mar 13 11:12:14.231683 master-0 kubenswrapper[33013]: I0313 11:12:14.231596 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-bccqm"]
Mar 13 11:12:14.241979 master-0 kubenswrapper[33013]: I0313 11:12:14.241909 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-bccqm"]
Mar 13 11:12:14.248749 master-0 kubenswrapper[33013]: I0313 11:12:14.248662 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.300223524 podStartE2EDuration="30.248639876s" podCreationTimestamp="2026-03-13 11:11:44 +0000 UTC" firstStartedPulling="2026-03-13 11:11:55.846425913 +0000 UTC m=+899.322379262" lastFinishedPulling="2026-03-13 11:12:13.794842265 +0000 UTC m=+917.270795614" observedRunningTime="2026-03-13 11:12:14.192065601 +0000 UTC
m=+917.668018950" watchObservedRunningTime="2026-03-13 11:12:14.248639876 +0000 UTC m=+917.724593415"
Mar 13 11:12:14.701974 master-0 kubenswrapper[33013]: I0313 11:12:14.701885 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Mar 13 11:12:14.724112 master-0 kubenswrapper[33013]: I0313 11:12:14.724049 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" path="/var/lib/kubelet/pods/a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4/volumes"
Mar 13 11:12:14.939220 master-0 kubenswrapper[33013]: I0313 11:12:14.939152 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Mar 13 11:12:14.978859 master-0 kubenswrapper[33013]: I0313 11:12:14.978737 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Mar 13 11:12:15.031618 master-0 kubenswrapper[33013]: I0313 11:12:15.030640 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" event={"ID":"30faeaae-b9cd-44d7-bb72-66a33ea7f14d","Type":"ContainerStarted","Data":"1c44b95718f62e1765a22af2d03e3dfe0496481b876182af6d5941ac47531813"}
Mar 13 11:12:15.031618 master-0 kubenswrapper[33013]: I0313 11:12:15.030864 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb"
Mar 13 11:12:15.034004 master-0 kubenswrapper[33013]: I0313 11:12:15.033354 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a33afdb8-ba14-4d4a-9031-63100db5abe1","Type":"ContainerStarted","Data":"2bad8dbc8e58792186e3b7b042c3b5f10f92422a065e7f2113733aa47e4e69e7"}
Mar 13 11:12:15.042677 master-0 kubenswrapper[33013]: I0313 11:12:15.036580 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0"
event={"ID":"6b0ea88a-1819-4ff0-b669-8635de5bf6f8","Type":"ContainerStarted","Data":"c18434a0da390b1a8c457b3ad62e348576c8e00767b4aaff4ada2fd7bb5142f1"}
Mar 13 11:12:15.042677 master-0 kubenswrapper[33013]: I0313 11:12:15.037178 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Mar 13 11:12:15.042677 master-0 kubenswrapper[33013]: I0313 11:12:15.042109 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lmj9d" event={"ID":"178e6fcb-b721-41b9-aef2-fceec7e95e89","Type":"ContainerStarted","Data":"d246cfe3ee9c9ca7009e938d90687a7a22fd8feabfaea04875a6b508621a7fe4"}
Mar 13 11:12:15.071618 master-0 kubenswrapper[33013]: I0313 11:12:15.063642 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" podStartSLOduration=8.063614726 podStartE2EDuration="8.063614726s" podCreationTimestamp="2026-03-13 11:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:12:15.058693848 +0000 UTC m=+918.534647207" watchObservedRunningTime="2026-03-13 11:12:15.063614726 +0000 UTC m=+918.539568075"
Mar 13 11:12:15.096624 master-0 kubenswrapper[33013]: I0313 11:12:15.095808 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Mar 13 11:12:15.130220 master-0 kubenswrapper[33013]: I0313 11:12:15.111013 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lmj9d" podStartSLOduration=2.870892444 podStartE2EDuration="8.110973713s" podCreationTimestamp="2026-03-13 11:12:07 +0000 UTC" firstStartedPulling="2026-03-13 11:12:08.639898398 +0000 UTC m=+912.115851747" lastFinishedPulling="2026-03-13 11:12:13.879979677 +0000 UTC m=+917.355933016" observedRunningTime="2026-03-13 11:12:15.080224555 +0000 UTC m=+918.556177914"
watchObservedRunningTime="2026-03-13 11:12:15.110973713 +0000 UTC m=+918.586927062"
Mar 13 11:12:15.658913 master-0 kubenswrapper[33013]: I0313 11:12:15.658836 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 13 11:12:15.658913 master-0 kubenswrapper[33013]: I0313 11:12:15.658897 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 13 11:12:16.702703 master-0 kubenswrapper[33013]: I0313 11:12:16.702603 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Mar 13 11:12:16.734012 master-0 kubenswrapper[33013]: I0313 11:12:16.733943 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 13 11:12:16.734012 master-0 kubenswrapper[33013]: I0313 11:12:16.733990 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 13 11:12:16.748700 master-0 kubenswrapper[33013]: I0313 11:12:16.748647 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Mar 13 11:12:17.097186 master-0 kubenswrapper[33013]: I0313 11:12:17.097121 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Mar 13 11:12:17.346556 master-0 kubenswrapper[33013]: I0313 11:12:17.346501 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Mar 13 11:12:17.349359 master-0 kubenswrapper[33013]: E0313 11:12:17.349291 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" containerName="init"
Mar 13 11:12:17.349451 master-0 kubenswrapper[33013]: I0313 11:12:17.349438 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" containerName="init"
Mar 13 11:12:17.349899 master-0
kubenswrapper[33013]: I0313 11:12:17.349881 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fbe8d4-4e09-4ee9-aff4-2e58514fe3c4" containerName="init"
Mar 13 11:12:17.351129 master-0 kubenswrapper[33013]: I0313 11:12:17.351110 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Mar 13 11:12:17.360712 master-0 kubenswrapper[33013]: I0313 11:12:17.355513 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Mar 13 11:12:17.360712 master-0 kubenswrapper[33013]: I0313 11:12:17.355885 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Mar 13 11:12:17.360712 master-0 kubenswrapper[33013]: I0313 11:12:17.356076 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Mar 13 11:12:17.360712 master-0 kubenswrapper[33013]: I0313 11:12:17.356293 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Mar 13 11:12:17.452894 master-0 kubenswrapper[33013]: I0313 11:12:17.452765 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.452894 master-0 kubenswrapper[33013]: I0313 11:12:17.452852 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-scripts\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.452894 master-0 kubenswrapper[33013]: I0313 11:12:17.452897 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"kube-api-access-vlntr\" (UniqueName: \"kubernetes.io/projected/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-kube-api-access-vlntr\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.453449 master-0 kubenswrapper[33013]: I0313 11:12:17.452940 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.453449 master-0 kubenswrapper[33013]: I0313 11:12:17.452980 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-config\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.453449 master-0 kubenswrapper[33013]: I0313 11:12:17.453039 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.453449 master-0 kubenswrapper[33013]: I0313 11:12:17.453306 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.556185 master-0 kubenswrapper[33013]: I0313 11:12:17.556102 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName:
\"kubernetes.io/secret/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.556185 master-0 kubenswrapper[33013]: I0313 11:12:17.556181 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-config\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.556535 master-0 kubenswrapper[33013]: I0313 11:12:17.556235 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.556535 master-0 kubenswrapper[33013]: I0313 11:12:17.556336 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.556535 master-0 kubenswrapper[33013]: I0313 11:12:17.556391 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.556535 master-0 kubenswrapper[33013]: I0313 11:12:17.556411 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-scripts\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") "
pod="openstack/ovn-northd-0"
Mar 13 11:12:17.556535 master-0 kubenswrapper[33013]: I0313 11:12:17.556437 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlntr\" (UniqueName: \"kubernetes.io/projected/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-kube-api-access-vlntr\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.557349 master-0 kubenswrapper[33013]: I0313 11:12:17.557312 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.557919 master-0 kubenswrapper[33013]: I0313 11:12:17.557867 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-scripts\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.558641 master-0 kubenswrapper[33013]: I0313 11:12:17.558448 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-config\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.563764 master-0 kubenswrapper[33013]: I0313 11:12:17.563713 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.563984 master-0 kubenswrapper[33013]: I0313 11:12:17.563926 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.571907 master-0 kubenswrapper[33013]: I0313 11:12:17.571855 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.575717 master-0 kubenswrapper[33013]: I0313 11:12:17.575528 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlntr\" (UniqueName: \"kubernetes.io/projected/bc7f7a2a-5fb8-4542-8f7b-ef67188474af-kube-api-access-vlntr\") pod \"ovn-northd-0\" (UID: \"bc7f7a2a-5fb8-4542-8f7b-ef67188474af\") " pod="openstack/ovn-northd-0"
Mar 13 11:12:17.684160 master-0 kubenswrapper[33013]: I0313 11:12:17.683992 33013 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-northd-0"
Mar 13 11:12:18.119111 master-0 kubenswrapper[33013]: I0313 11:12:18.118869 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 13 11:12:18.199131 master-0 kubenswrapper[33013]: I0313 11:12:18.199055 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 13 11:12:18.246921 master-0 kubenswrapper[33013]: W0313 11:12:18.246853 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7f7a2a_5fb8_4542_8f7b_ef67188474af.slice/crio-1bc96df490c845439512232045dc810cd6c2f3fb3e42c64255a2edf157e65bf8 WatchSource:0}: Error finding container 1bc96df490c845439512232045dc810cd6c2f3fb3e42c64255a2edf157e65bf8: Status 404 returned error can't find the container with id 1bc96df490c845439512232045dc810cd6c2f3fb3e42c64255a2edf157e65bf8
Mar 13 11:12:18.251374 master-0 kubenswrapper[33013]: I0313 11:12:18.251282 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Mar 13 11:12:19.134973 master-0 kubenswrapper[33013]: I0313 11:12:19.129226 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bc7f7a2a-5fb8-4542-8f7b-ef67188474af","Type":"ContainerStarted","Data":"1bc96df490c845439512232045dc810cd6c2f3fb3e42c64255a2edf157e65bf8"}
Mar 13 11:12:19.134973 master-0 kubenswrapper[33013]: I0313 11:12:19.132870 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-2cpmb"]
Mar 13 11:12:19.134973 master-0 kubenswrapper[33013]: I0313 11:12:19.133258 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" podUID="30faeaae-b9cd-44d7-bb72-66a33ea7f14d" containerName="dnsmasq-dns" containerID="cri-o://1c44b95718f62e1765a22af2d03e3dfe0496481b876182af6d5941ac47531813"
gracePeriod=10
Mar 13 11:12:19.138770 master-0 kubenswrapper[33013]: I0313 11:12:19.137354 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb"
Mar 13 11:12:19.214421 master-0 kubenswrapper[33013]: I0313 11:12:19.214330 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-trbrb"]
Mar 13 11:12:19.217106 master-0 kubenswrapper[33013]: I0313 11:12:19.217034 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb"
Mar 13 11:12:19.318804 master-0 kubenswrapper[33013]: I0313 11:12:19.318740 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-trbrb"]
Mar 13 11:12:19.448689 master-0 kubenswrapper[33013]: I0313 11:12:19.447859 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb"
Mar 13 11:12:19.448689 master-0 kubenswrapper[33013]: I0313 11:12:19.447951 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb"
Mar 13 11:12:19.448689 master-0 kubenswrapper[33013]: I0313 11:12:19.447986 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") "
pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.475356 master-0 kubenswrapper[33013]: I0313 11:12:19.448032 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-config\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.475356 master-0 kubenswrapper[33013]: I0313 11:12:19.473385 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2znkm\" (UniqueName: \"kubernetes.io/projected/b91d0010-b2bb-4203-99fe-500d30d7d691-kube-api-access-2znkm\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.575102 master-0 kubenswrapper[33013]: I0313 11:12:19.574990 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.575102 master-0 kubenswrapper[33013]: I0313 11:12:19.575100 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.575374 master-0 kubenswrapper[33013]: I0313 11:12:19.575141 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: 
\"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.575374 master-0 kubenswrapper[33013]: I0313 11:12:19.575176 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-config\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.575938 master-0 kubenswrapper[33013]: I0313 11:12:19.575715 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2znkm\" (UniqueName: \"kubernetes.io/projected/b91d0010-b2bb-4203-99fe-500d30d7d691-kube-api-access-2znkm\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.577338 master-0 kubenswrapper[33013]: I0313 11:12:19.577299 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.579219 master-0 kubenswrapper[33013]: I0313 11:12:19.578721 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.579498 master-0 kubenswrapper[33013]: I0313 11:12:19.579442 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: 
\"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.581784 master-0 kubenswrapper[33013]: I0313 11:12:19.581661 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-config\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.627542 master-0 kubenswrapper[33013]: I0313 11:12:19.627481 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2znkm\" (UniqueName: \"kubernetes.io/projected/b91d0010-b2bb-4203-99fe-500d30d7d691-kube-api-access-2znkm\") pod \"dnsmasq-dns-5b8649b7f9-trbrb\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:19.658294 master-0 kubenswrapper[33013]: I0313 11:12:19.658222 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:20.172621 master-0 kubenswrapper[33013]: I0313 11:12:20.171557 33013 generic.go:334] "Generic (PLEG): container finished" podID="30faeaae-b9cd-44d7-bb72-66a33ea7f14d" containerID="1c44b95718f62e1765a22af2d03e3dfe0496481b876182af6d5941ac47531813" exitCode=0 Mar 13 11:12:20.172621 master-0 kubenswrapper[33013]: I0313 11:12:20.171633 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" event={"ID":"30faeaae-b9cd-44d7-bb72-66a33ea7f14d","Type":"ContainerDied","Data":"1c44b95718f62e1765a22af2d03e3dfe0496481b876182af6d5941ac47531813"} Mar 13 11:12:20.589667 master-0 kubenswrapper[33013]: I0313 11:12:20.588991 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:20.713087 master-0 kubenswrapper[33013]: I0313 11:12:20.713035 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-nb\") pod \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " Mar 13 11:12:20.713280 master-0 kubenswrapper[33013]: I0313 11:12:20.713192 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-dns-svc\") pod \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " Mar 13 11:12:20.713280 master-0 kubenswrapper[33013]: I0313 11:12:20.713248 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-config\") pod \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " Mar 13 11:12:20.713357 master-0 kubenswrapper[33013]: I0313 11:12:20.713316 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4g6p\" (UniqueName: \"kubernetes.io/projected/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-kube-api-access-c4g6p\") pod \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " Mar 13 11:12:20.713390 master-0 kubenswrapper[33013]: I0313 11:12:20.713381 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-sb\") pod \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\" (UID: \"30faeaae-b9cd-44d7-bb72-66a33ea7f14d\") " Mar 13 11:12:20.724913 master-0 kubenswrapper[33013]: I0313 11:12:20.724733 33013 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-kube-api-access-c4g6p" (OuterVolumeSpecName: "kube-api-access-c4g6p") pod "30faeaae-b9cd-44d7-bb72-66a33ea7f14d" (UID: "30faeaae-b9cd-44d7-bb72-66a33ea7f14d"). InnerVolumeSpecName "kube-api-access-c4g6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:20.778533 master-0 kubenswrapper[33013]: I0313 11:12:20.768312 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-config" (OuterVolumeSpecName: "config") pod "30faeaae-b9cd-44d7-bb72-66a33ea7f14d" (UID: "30faeaae-b9cd-44d7-bb72-66a33ea7f14d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:20.779222 master-0 kubenswrapper[33013]: I0313 11:12:20.779171 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30faeaae-b9cd-44d7-bb72-66a33ea7f14d" (UID: "30faeaae-b9cd-44d7-bb72-66a33ea7f14d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:20.779801 master-0 kubenswrapper[33013]: I0313 11:12:20.779739 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30faeaae-b9cd-44d7-bb72-66a33ea7f14d" (UID: "30faeaae-b9cd-44d7-bb72-66a33ea7f14d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:20.818859 master-0 kubenswrapper[33013]: I0313 11:12:20.817944 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:20.818859 master-0 kubenswrapper[33013]: I0313 11:12:20.818072 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4g6p\" (UniqueName: \"kubernetes.io/projected/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-kube-api-access-c4g6p\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:20.818859 master-0 kubenswrapper[33013]: I0313 11:12:20.818089 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:20.818859 master-0 kubenswrapper[33013]: I0313 11:12:20.818099 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:20.823106 master-0 kubenswrapper[33013]: I0313 11:12:20.823068 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 13 11:12:20.834941 master-0 kubenswrapper[33013]: I0313 11:12:20.834889 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30faeaae-b9cd-44d7-bb72-66a33ea7f14d" (UID: "30faeaae-b9cd-44d7-bb72-66a33ea7f14d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:20.920082 master-0 kubenswrapper[33013]: I0313 11:12:20.920055 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30faeaae-b9cd-44d7-bb72-66a33ea7f14d-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:20.937865 master-0 kubenswrapper[33013]: I0313 11:12:20.937814 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 13 11:12:20.991580 master-0 kubenswrapper[33013]: I0313 11:12:20.991538 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-trbrb"] Mar 13 11:12:21.003722 master-0 kubenswrapper[33013]: W0313 11:12:21.003663 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb91d0010_b2bb_4203_99fe_500d30d7d691.slice/crio-71337851cfa8193c9d3a6cefc842a19ca871bbefa1752e82ee2c652e584be2d0 WatchSource:0}: Error finding container 71337851cfa8193c9d3a6cefc842a19ca871bbefa1752e82ee2c652e584be2d0: Status 404 returned error can't find the container with id 71337851cfa8193c9d3a6cefc842a19ca871bbefa1752e82ee2c652e584be2d0 Mar 13 11:12:21.190740 master-0 kubenswrapper[33013]: I0313 11:12:21.184489 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" event={"ID":"30faeaae-b9cd-44d7-bb72-66a33ea7f14d","Type":"ContainerDied","Data":"608a6a93d95aeda59d6cc5f878bd5cba6299ab507fae576bfb748af4d060ee59"} Mar 13 11:12:21.190740 master-0 kubenswrapper[33013]: I0313 11:12:21.184562 33013 scope.go:117] "RemoveContainer" containerID="1c44b95718f62e1765a22af2d03e3dfe0496481b876182af6d5941ac47531813" Mar 13 11:12:21.190740 master-0 kubenswrapper[33013]: I0313 11:12:21.184724 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bc987d9f-2cpmb" Mar 13 11:12:21.190740 master-0 kubenswrapper[33013]: I0313 11:12:21.188766 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bc7f7a2a-5fb8-4542-8f7b-ef67188474af","Type":"ContainerStarted","Data":"33cbda4eda9263d0ea57e3d15879c21ed4e934f3a5ca1cd2f01710557145cf3c"} Mar 13 11:12:21.190740 master-0 kubenswrapper[33013]: I0313 11:12:21.188817 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bc7f7a2a-5fb8-4542-8f7b-ef67188474af","Type":"ContainerStarted","Data":"7c9baab82cd37e1a52e26908207e1ec7b67d392294dfbe92047267c8eecc2e77"} Mar 13 11:12:21.190740 master-0 kubenswrapper[33013]: I0313 11:12:21.190327 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 13 11:12:21.193368 master-0 kubenswrapper[33013]: I0313 11:12:21.192945 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" event={"ID":"b91d0010-b2bb-4203-99fe-500d30d7d691","Type":"ContainerStarted","Data":"71337851cfa8193c9d3a6cefc842a19ca871bbefa1752e82ee2c652e584be2d0"} Mar 13 11:12:21.207495 master-0 kubenswrapper[33013]: I0313 11:12:21.207450 33013 scope.go:117] "RemoveContainer" containerID="50d8ac66c871323deb7a2c33996406e3bc7d7b1d977b070b0b6f7b176423a85b" Mar 13 11:12:21.238855 master-0 kubenswrapper[33013]: I0313 11:12:21.237641 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.065623229 podStartE2EDuration="4.237617789s" podCreationTimestamp="2026-03-13 11:12:17 +0000 UTC" firstStartedPulling="2026-03-13 11:12:18.25069396 +0000 UTC m=+921.726647309" lastFinishedPulling="2026-03-13 11:12:20.42268852 +0000 UTC m=+923.898641869" observedRunningTime="2026-03-13 11:12:21.219031474 +0000 UTC m=+924.694984833" watchObservedRunningTime="2026-03-13 11:12:21.237617789 +0000 UTC 
m=+924.713571138" Mar 13 11:12:21.261375 master-0 kubenswrapper[33013]: I0313 11:12:21.261321 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 13 11:12:21.262414 master-0 kubenswrapper[33013]: E0313 11:12:21.262388 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30faeaae-b9cd-44d7-bb72-66a33ea7f14d" containerName="dnsmasq-dns" Mar 13 11:12:21.262541 master-0 kubenswrapper[33013]: I0313 11:12:21.262528 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="30faeaae-b9cd-44d7-bb72-66a33ea7f14d" containerName="dnsmasq-dns" Mar 13 11:12:21.262652 master-0 kubenswrapper[33013]: E0313 11:12:21.262640 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30faeaae-b9cd-44d7-bb72-66a33ea7f14d" containerName="init" Mar 13 11:12:21.262746 master-0 kubenswrapper[33013]: I0313 11:12:21.262732 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="30faeaae-b9cd-44d7-bb72-66a33ea7f14d" containerName="init" Mar 13 11:12:21.263517 master-0 kubenswrapper[33013]: I0313 11:12:21.263497 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="30faeaae-b9cd-44d7-bb72-66a33ea7f14d" containerName="dnsmasq-dns" Mar 13 11:12:21.305227 master-0 kubenswrapper[33013]: I0313 11:12:21.302190 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-2cpmb"] Mar 13 11:12:21.305227 master-0 kubenswrapper[33013]: I0313 11:12:21.302403 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 13 11:12:21.305227 master-0 kubenswrapper[33013]: I0313 11:12:21.304081 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 13 11:12:21.307677 master-0 kubenswrapper[33013]: I0313 11:12:21.305902 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 13 11:12:21.307677 master-0 kubenswrapper[33013]: I0313 11:12:21.306131 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 13 11:12:21.361673 master-0 kubenswrapper[33013]: I0313 11:12:21.359744 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-2cpmb"] Mar 13 11:12:21.381989 master-0 kubenswrapper[33013]: I0313 11:12:21.381898 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 13 11:12:21.440426 master-0 kubenswrapper[33013]: I0313 11:12:21.440336 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj8pn\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-kube-api-access-bj8pn\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.440812 master-0 kubenswrapper[33013]: I0313 11:12:21.440787 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/303793a4-990d-4b5f-bb44-ff67b1985406-lock\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.441095 master-0 kubenswrapper[33013]: I0313 11:12:21.441043 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e5bb5ea-5a32-4212-83a8-92c88c49ee62\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^f2207d6a-d9d8-4243-9478-86e3445609a0\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.441513 master-0 kubenswrapper[33013]: I0313 11:12:21.441497 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303793a4-990d-4b5f-bb44-ff67b1985406-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.441696 master-0 kubenswrapper[33013]: I0313 11:12:21.441680 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/303793a4-990d-4b5f-bb44-ff67b1985406-cache\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.441877 master-0 kubenswrapper[33013]: I0313 11:12:21.441831 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.543795 master-0 kubenswrapper[33013]: I0313 11:12:21.543632 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303793a4-990d-4b5f-bb44-ff67b1985406-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.543795 master-0 kubenswrapper[33013]: I0313 11:12:21.543740 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/303793a4-990d-4b5f-bb44-ff67b1985406-cache\") pod 
\"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.544076 master-0 kubenswrapper[33013]: I0313 11:12:21.543818 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.544076 master-0 kubenswrapper[33013]: I0313 11:12:21.543871 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj8pn\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-kube-api-access-bj8pn\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.544076 master-0 kubenswrapper[33013]: I0313 11:12:21.543915 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/303793a4-990d-4b5f-bb44-ff67b1985406-lock\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.544076 master-0 kubenswrapper[33013]: I0313 11:12:21.544000 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6e5bb5ea-5a32-4212-83a8-92c88c49ee62\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f2207d6a-d9d8-4243-9478-86e3445609a0\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.544428 master-0 kubenswrapper[33013]: E0313 11:12:21.544402 33013 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 11:12:21.544519 master-0 kubenswrapper[33013]: E0313 11:12:21.544505 33013 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Mar 13 11:12:21.544663 master-0 kubenswrapper[33013]: E0313 11:12:21.544649 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift podName:303793a4-990d-4b5f-bb44-ff67b1985406 nodeName:}" failed. No retries permitted until 2026-03-13 11:12:22.044632999 +0000 UTC m=+925.520586348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift") pod "swift-storage-0" (UID: "303793a4-990d-4b5f-bb44-ff67b1985406") : configmap "swift-ring-files" not found Mar 13 11:12:21.545279 master-0 kubenswrapper[33013]: I0313 11:12:21.545203 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/303793a4-990d-4b5f-bb44-ff67b1985406-cache\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.546894 master-0 kubenswrapper[33013]: I0313 11:12:21.546864 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/303793a4-990d-4b5f-bb44-ff67b1985406-lock\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.547890 master-0 kubenswrapper[33013]: I0313 11:12:21.547845 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 11:12:21.548016 master-0 kubenswrapper[33013]: I0313 11:12:21.547903 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6e5bb5ea-5a32-4212-83a8-92c88c49ee62\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f2207d6a-d9d8-4243-9478-86e3445609a0\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6c6952f35782aead5fd6592fff161c0a5cbafc38cd2071a415c24850425743ef/globalmount\"" pod="openstack/swift-storage-0" Mar 13 11:12:21.552906 master-0 kubenswrapper[33013]: I0313 11:12:21.552812 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303793a4-990d-4b5f-bb44-ff67b1985406-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:21.567829 master-0 kubenswrapper[33013]: I0313 11:12:21.567782 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj8pn\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-kube-api-access-bj8pn\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:22.056414 master-0 kubenswrapper[33013]: I0313 11:12:22.056320 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:22.056701 master-0 kubenswrapper[33013]: E0313 11:12:22.056636 33013 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 11:12:22.056701 master-0 kubenswrapper[33013]: E0313 11:12:22.056656 33013 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 11:12:22.056701 master-0 kubenswrapper[33013]: E0313 11:12:22.056702 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift podName:303793a4-990d-4b5f-bb44-ff67b1985406 nodeName:}" failed. No retries permitted until 2026-03-13 11:12:23.056687384 +0000 UTC m=+926.532640733 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift") pod "swift-storage-0" (UID: "303793a4-990d-4b5f-bb44-ff67b1985406") : configmap "swift-ring-files" not found Mar 13 11:12:22.209054 master-0 kubenswrapper[33013]: I0313 11:12:22.208978 33013 generic.go:334] "Generic (PLEG): container finished" podID="b91d0010-b2bb-4203-99fe-500d30d7d691" containerID="21f3dbe45876a3770a03afe16362eac9fa016be600bd32ecf37f0429082348de" exitCode=0 Mar 13 11:12:22.209612 master-0 kubenswrapper[33013]: I0313 11:12:22.209065 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" event={"ID":"b91d0010-b2bb-4203-99fe-500d30d7d691","Type":"ContainerDied","Data":"21f3dbe45876a3770a03afe16362eac9fa016be600bd32ecf37f0429082348de"} Mar 13 11:12:22.733207 master-0 kubenswrapper[33013]: I0313 11:12:22.733060 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30faeaae-b9cd-44d7-bb72-66a33ea7f14d" path="/var/lib/kubelet/pods/30faeaae-b9cd-44d7-bb72-66a33ea7f14d/volumes" Mar 13 11:12:22.927282 master-0 kubenswrapper[33013]: I0313 11:12:22.927238 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6e5bb5ea-5a32-4212-83a8-92c88c49ee62\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f2207d6a-d9d8-4243-9478-86e3445609a0\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:23.081868 master-0 
kubenswrapper[33013]: I0313 11:12:23.081800 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:23.082161 master-0 kubenswrapper[33013]: E0313 11:12:23.082095 33013 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 11:12:23.082161 master-0 kubenswrapper[33013]: E0313 11:12:23.082146 33013 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 11:12:23.082253 master-0 kubenswrapper[33013]: E0313 11:12:23.082232 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift podName:303793a4-990d-4b5f-bb44-ff67b1985406 nodeName:}" failed. No retries permitted until 2026-03-13 11:12:25.082203963 +0000 UTC m=+928.558157332 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift") pod "swift-storage-0" (UID: "303793a4-990d-4b5f-bb44-ff67b1985406") : configmap "swift-ring-files" not found Mar 13 11:12:23.223221 master-0 kubenswrapper[33013]: I0313 11:12:23.223151 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" event={"ID":"b91d0010-b2bb-4203-99fe-500d30d7d691","Type":"ContainerStarted","Data":"b764c2a493281f83a932ef04377d2e7ecfa9a28dc3f7c001b96916d5d7a36b01"} Mar 13 11:12:23.223964 master-0 kubenswrapper[33013]: I0313 11:12:23.223393 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:23.247534 master-0 kubenswrapper[33013]: I0313 11:12:23.247375 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" podStartSLOduration=4.247355972 podStartE2EDuration="4.247355972s" podCreationTimestamp="2026-03-13 11:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:12:23.241930019 +0000 UTC m=+926.717883378" watchObservedRunningTime="2026-03-13 11:12:23.247355972 +0000 UTC m=+926.723309321" Mar 13 11:12:23.568099 master-0 kubenswrapper[33013]: I0313 11:12:23.568027 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-v5mmf"] Mar 13 11:12:23.570026 master-0 kubenswrapper[33013]: I0313 11:12:23.569994 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:23.577615 master-0 kubenswrapper[33013]: I0313 11:12:23.574328 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 13 11:12:23.586761 master-0 kubenswrapper[33013]: I0313 11:12:23.586719 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v5mmf"] Mar 13 11:12:23.697447 master-0 kubenswrapper[33013]: I0313 11:12:23.697373 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmpgn\" (UniqueName: \"kubernetes.io/projected/750dd69c-fa3d-4799-8c7e-42f0328254f6-kube-api-access-mmpgn\") pod \"root-account-create-update-v5mmf\" (UID: \"750dd69c-fa3d-4799-8c7e-42f0328254f6\") " pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:23.697715 master-0 kubenswrapper[33013]: I0313 11:12:23.697511 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/750dd69c-fa3d-4799-8c7e-42f0328254f6-operator-scripts\") pod \"root-account-create-update-v5mmf\" (UID: \"750dd69c-fa3d-4799-8c7e-42f0328254f6\") " pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:23.751698 master-0 kubenswrapper[33013]: I0313 11:12:23.751638 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-b7cs5"] Mar 13 11:12:23.753447 master-0 kubenswrapper[33013]: I0313 11:12:23.753397 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:23.788012 master-0 kubenswrapper[33013]: I0313 11:12:23.787958 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-b7cs5"] Mar 13 11:12:23.800022 master-0 kubenswrapper[33013]: I0313 11:12:23.799918 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/750dd69c-fa3d-4799-8c7e-42f0328254f6-operator-scripts\") pod \"root-account-create-update-v5mmf\" (UID: \"750dd69c-fa3d-4799-8c7e-42f0328254f6\") " pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:23.800302 master-0 kubenswrapper[33013]: I0313 11:12:23.800253 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-operator-scripts\") pod \"glance-db-create-b7cs5\" (UID: \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\") " pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:23.800462 master-0 kubenswrapper[33013]: I0313 11:12:23.800340 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmpgn\" (UniqueName: \"kubernetes.io/projected/750dd69c-fa3d-4799-8c7e-42f0328254f6-kube-api-access-mmpgn\") pod \"root-account-create-update-v5mmf\" (UID: \"750dd69c-fa3d-4799-8c7e-42f0328254f6\") " pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:23.800462 master-0 kubenswrapper[33013]: I0313 11:12:23.800404 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdvd2\" (UniqueName: \"kubernetes.io/projected/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-kube-api-access-bdvd2\") pod \"glance-db-create-b7cs5\" (UID: \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\") " pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:23.800704 master-0 kubenswrapper[33013]: I0313 11:12:23.800671 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/750dd69c-fa3d-4799-8c7e-42f0328254f6-operator-scripts\") pod \"root-account-create-update-v5mmf\" (UID: \"750dd69c-fa3d-4799-8c7e-42f0328254f6\") " pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:23.817087 master-0 kubenswrapper[33013]: I0313 11:12:23.817024 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmpgn\" (UniqueName: \"kubernetes.io/projected/750dd69c-fa3d-4799-8c7e-42f0328254f6-kube-api-access-mmpgn\") pod \"root-account-create-update-v5mmf\" (UID: \"750dd69c-fa3d-4799-8c7e-42f0328254f6\") " pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:23.881966 master-0 kubenswrapper[33013]: I0313 11:12:23.879938 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2dc3-account-create-update-8vftc"] Mar 13 11:12:23.881966 master-0 kubenswrapper[33013]: I0313 11:12:23.881456 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:23.892719 master-0 kubenswrapper[33013]: I0313 11:12:23.887259 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 13 11:12:23.907494 master-0 kubenswrapper[33013]: I0313 11:12:23.904135 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-operator-scripts\") pod \"glance-db-create-b7cs5\" (UID: \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\") " pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:23.907494 master-0 kubenswrapper[33013]: I0313 11:12:23.904257 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdvd2\" (UniqueName: \"kubernetes.io/projected/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-kube-api-access-bdvd2\") pod \"glance-db-create-b7cs5\" (UID: \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\") " pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:23.907494 master-0 kubenswrapper[33013]: I0313 11:12:23.905919 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-operator-scripts\") pod \"glance-db-create-b7cs5\" (UID: \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\") " pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:23.907494 master-0 kubenswrapper[33013]: I0313 11:12:23.906645 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:23.938869 master-0 kubenswrapper[33013]: I0313 11:12:23.938820 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2dc3-account-create-update-8vftc"] Mar 13 11:12:23.940780 master-0 kubenswrapper[33013]: I0313 11:12:23.940728 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdvd2\" (UniqueName: \"kubernetes.io/projected/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-kube-api-access-bdvd2\") pod \"glance-db-create-b7cs5\" (UID: \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\") " pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:24.015137 master-0 kubenswrapper[33013]: I0313 11:12:24.015008 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-operator-scripts\") pod \"glance-2dc3-account-create-update-8vftc\" (UID: \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\") " pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:24.015137 master-0 kubenswrapper[33013]: I0313 11:12:24.015097 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4bw5\" (UniqueName: \"kubernetes.io/projected/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-kube-api-access-m4bw5\") pod \"glance-2dc3-account-create-update-8vftc\" (UID: \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\") " pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:24.094637 master-0 kubenswrapper[33013]: I0313 11:12:24.087163 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:24.102559 master-0 kubenswrapper[33013]: I0313 11:12:24.102507 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-flpq9"] Mar 13 11:12:24.104441 master-0 kubenswrapper[33013]: I0313 11:12:24.104398 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.109554 master-0 kubenswrapper[33013]: I0313 11:12:24.109454 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 13 11:12:24.109976 master-0 kubenswrapper[33013]: I0313 11:12:24.109945 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 13 11:12:24.110171 master-0 kubenswrapper[33013]: I0313 11:12:24.110143 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 13 11:12:24.110408 master-0 kubenswrapper[33013]: I0313 11:12:24.110346 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-flpq9"] Mar 13 11:12:24.120741 master-0 kubenswrapper[33013]: I0313 11:12:24.117207 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-operator-scripts\") pod \"glance-2dc3-account-create-update-8vftc\" (UID: \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\") " pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:24.120741 master-0 kubenswrapper[33013]: I0313 11:12:24.117398 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4bw5\" (UniqueName: \"kubernetes.io/projected/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-kube-api-access-m4bw5\") pod \"glance-2dc3-account-create-update-8vftc\" (UID: \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\") " 
pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:24.120741 master-0 kubenswrapper[33013]: I0313 11:12:24.117975 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-operator-scripts\") pod \"glance-2dc3-account-create-update-8vftc\" (UID: \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\") " pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:24.140211 master-0 kubenswrapper[33013]: I0313 11:12:24.140162 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4bw5\" (UniqueName: \"kubernetes.io/projected/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-kube-api-access-m4bw5\") pod \"glance-2dc3-account-create-update-8vftc\" (UID: \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\") " pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:24.220074 master-0 kubenswrapper[33013]: I0313 11:12:24.220042 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-ring-data-devices\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.220502 master-0 kubenswrapper[33013]: I0313 11:12:24.220484 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-combined-ca-bundle\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.220758 master-0 kubenswrapper[33013]: I0313 11:12:24.220739 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-scripts\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.220886 master-0 kubenswrapper[33013]: I0313 11:12:24.220872 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-swiftconf\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.221069 master-0 kubenswrapper[33013]: I0313 11:12:24.221031 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf20429e-cff0-4482-b2f6-3aab17d64e57-etc-swift\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.227333 master-0 kubenswrapper[33013]: I0313 11:12:24.227239 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwssr\" (UniqueName: \"kubernetes.io/projected/bf20429e-cff0-4482-b2f6-3aab17d64e57-kube-api-access-kwssr\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.227333 master-0 kubenswrapper[33013]: I0313 11:12:24.227325 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-dispersionconf\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.294464 master-0 kubenswrapper[33013]: I0313 11:12:24.291173 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:24.330953 master-0 kubenswrapper[33013]: I0313 11:12:24.330085 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-ring-data-devices\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.330953 master-0 kubenswrapper[33013]: I0313 11:12:24.330236 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-combined-ca-bundle\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.330953 master-0 kubenswrapper[33013]: I0313 11:12:24.330283 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-scripts\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.330953 master-0 kubenswrapper[33013]: I0313 11:12:24.330328 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-swiftconf\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.330953 master-0 kubenswrapper[33013]: I0313 11:12:24.330380 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf20429e-cff0-4482-b2f6-3aab17d64e57-etc-swift\") pod \"swift-ring-rebalance-flpq9\" (UID: 
\"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.330953 master-0 kubenswrapper[33013]: I0313 11:12:24.330490 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwssr\" (UniqueName: \"kubernetes.io/projected/bf20429e-cff0-4482-b2f6-3aab17d64e57-kube-api-access-kwssr\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.330953 master-0 kubenswrapper[33013]: I0313 11:12:24.330529 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-dispersionconf\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.332082 master-0 kubenswrapper[33013]: I0313 11:12:24.332015 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf20429e-cff0-4482-b2f6-3aab17d64e57-etc-swift\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.332082 master-0 kubenswrapper[33013]: I0313 11:12:24.332030 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-ring-data-devices\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.332334 master-0 kubenswrapper[33013]: I0313 11:12:24.332277 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-scripts\") pod \"swift-ring-rebalance-flpq9\" (UID: 
\"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.334678 master-0 kubenswrapper[33013]: I0313 11:12:24.334627 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-combined-ca-bundle\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.335037 master-0 kubenswrapper[33013]: I0313 11:12:24.335004 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-dispersionconf\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.335852 master-0 kubenswrapper[33013]: I0313 11:12:24.335816 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-swiftconf\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.359724 master-0 kubenswrapper[33013]: I0313 11:12:24.352760 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwssr\" (UniqueName: \"kubernetes.io/projected/bf20429e-cff0-4482-b2f6-3aab17d64e57-kube-api-access-kwssr\") pod \"swift-ring-rebalance-flpq9\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") " pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.389487 master-0 kubenswrapper[33013]: I0313 11:12:24.383831 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v5mmf"] Mar 13 11:12:24.521836 master-0 kubenswrapper[33013]: I0313 11:12:24.521781 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:24.615407 master-0 kubenswrapper[33013]: I0313 11:12:24.615343 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-b7cs5"] Mar 13 11:12:24.786866 master-0 kubenswrapper[33013]: W0313 11:12:24.782177 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8db9c8d1_1f9e_46a2_b1b6_9398919b760b.slice/crio-fb065d1ff62e8aea28107f467d510c185b043f2ec388563de3dcd3406551c00a WatchSource:0}: Error finding container fb065d1ff62e8aea28107f467d510c185b043f2ec388563de3dcd3406551c00a: Status 404 returned error can't find the container with id fb065d1ff62e8aea28107f467d510c185b043f2ec388563de3dcd3406551c00a Mar 13 11:12:24.786866 master-0 kubenswrapper[33013]: I0313 11:12:24.785574 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2dc3-account-create-update-8vftc"] Mar 13 11:12:25.025903 master-0 kubenswrapper[33013]: W0313 11:12:25.025826 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf20429e_cff0_4482_b2f6_3aab17d64e57.slice/crio-dbd3c096486e77968440eb6462e331e88eb052c578708cc6a78714b25c7138d2 WatchSource:0}: Error finding container dbd3c096486e77968440eb6462e331e88eb052c578708cc6a78714b25c7138d2: Status 404 returned error can't find the container with id dbd3c096486e77968440eb6462e331e88eb052c578708cc6a78714b25c7138d2 Mar 13 11:12:25.025988 master-0 kubenswrapper[33013]: I0313 11:12:25.025881 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-flpq9"] Mar 13 11:12:25.151648 master-0 kubenswrapper[33013]: I0313 11:12:25.151554 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift\") pod \"swift-storage-0\" 
(UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:25.152097 master-0 kubenswrapper[33013]: E0313 11:12:25.151824 33013 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 11:12:25.152097 master-0 kubenswrapper[33013]: E0313 11:12:25.151855 33013 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 11:12:25.152097 master-0 kubenswrapper[33013]: E0313 11:12:25.151913 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift podName:303793a4-990d-4b5f-bb44-ff67b1985406 nodeName:}" failed. No retries permitted until 2026-03-13 11:12:29.151896337 +0000 UTC m=+932.627849686 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift") pod "swift-storage-0" (UID: "303793a4-990d-4b5f-bb44-ff67b1985406") : configmap "swift-ring-files" not found Mar 13 11:12:25.253682 master-0 kubenswrapper[33013]: I0313 11:12:25.253581 33013 generic.go:334] "Generic (PLEG): container finished" podID="b1d2c12d-e0f4-4826-a942-52d09d6ff4ca" containerID="3738685ff15ea7554790af06095301059472524beb5eb802a45832df41cbbb42" exitCode=0 Mar 13 11:12:25.254296 master-0 kubenswrapper[33013]: I0313 11:12:25.253759 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b7cs5" event={"ID":"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca","Type":"ContainerDied","Data":"3738685ff15ea7554790af06095301059472524beb5eb802a45832df41cbbb42"} Mar 13 11:12:25.254296 master-0 kubenswrapper[33013]: I0313 11:12:25.253811 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b7cs5" 
event={"ID":"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca","Type":"ContainerStarted","Data":"f492c326c83d731cef7b38c881c943ae4ac8f9254e9a2486dcd562d1bbd30c57"} Mar 13 11:12:25.256331 master-0 kubenswrapper[33013]: I0313 11:12:25.256267 33013 generic.go:334] "Generic (PLEG): container finished" podID="750dd69c-fa3d-4799-8c7e-42f0328254f6" containerID="d96cd90540cc27d403a8cef45a3bdf6266f3e5a99e7e6e0d0eef53846290d34d" exitCode=0 Mar 13 11:12:25.256449 master-0 kubenswrapper[33013]: I0313 11:12:25.256408 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v5mmf" event={"ID":"750dd69c-fa3d-4799-8c7e-42f0328254f6","Type":"ContainerDied","Data":"d96cd90540cc27d403a8cef45a3bdf6266f3e5a99e7e6e0d0eef53846290d34d"} Mar 13 11:12:25.256498 master-0 kubenswrapper[33013]: I0313 11:12:25.256465 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v5mmf" event={"ID":"750dd69c-fa3d-4799-8c7e-42f0328254f6","Type":"ContainerStarted","Data":"19ebea6f0f4a8239384a19e4341d5c79722b5615f77563d2aac5acbcc8032b38"} Mar 13 11:12:25.260020 master-0 kubenswrapper[33013]: I0313 11:12:25.259965 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2dc3-account-create-update-8vftc" event={"ID":"8db9c8d1-1f9e-46a2-b1b6-9398919b760b","Type":"ContainerStarted","Data":"da6c0f955c283dc17f7d75234bdde669ae29d9f4e9cced3bbc5a6b9f5e133f87"} Mar 13 11:12:25.260093 master-0 kubenswrapper[33013]: I0313 11:12:25.260037 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2dc3-account-create-update-8vftc" event={"ID":"8db9c8d1-1f9e-46a2-b1b6-9398919b760b","Type":"ContainerStarted","Data":"fb065d1ff62e8aea28107f467d510c185b043f2ec388563de3dcd3406551c00a"} Mar 13 11:12:25.261957 master-0 kubenswrapper[33013]: I0313 11:12:25.261558 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-flpq9" 
event={"ID":"bf20429e-cff0-4482-b2f6-3aab17d64e57","Type":"ContainerStarted","Data":"dbd3c096486e77968440eb6462e331e88eb052c578708cc6a78714b25c7138d2"} Mar 13 11:12:25.304731 master-0 kubenswrapper[33013]: I0313 11:12:25.304615 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2dc3-account-create-update-8vftc" podStartSLOduration=2.304567814 podStartE2EDuration="2.304567814s" podCreationTimestamp="2026-03-13 11:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:12:25.293562754 +0000 UTC m=+928.769516123" watchObservedRunningTime="2026-03-13 11:12:25.304567814 +0000 UTC m=+928.780521163" Mar 13 11:12:26.273122 master-0 kubenswrapper[33013]: I0313 11:12:26.272935 33013 generic.go:334] "Generic (PLEG): container finished" podID="8db9c8d1-1f9e-46a2-b1b6-9398919b760b" containerID="da6c0f955c283dc17f7d75234bdde669ae29d9f4e9cced3bbc5a6b9f5e133f87" exitCode=0 Mar 13 11:12:26.273753 master-0 kubenswrapper[33013]: I0313 11:12:26.273662 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2dc3-account-create-update-8vftc" event={"ID":"8db9c8d1-1f9e-46a2-b1b6-9398919b760b","Type":"ContainerDied","Data":"da6c0f955c283dc17f7d75234bdde669ae29d9f4e9cced3bbc5a6b9f5e133f87"} Mar 13 11:12:27.314989 master-0 kubenswrapper[33013]: I0313 11:12:27.314933 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-pd8xv"] Mar 13 11:12:27.320600 master-0 kubenswrapper[33013]: I0313 11:12:27.320542 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pd8xv" Mar 13 11:12:27.333689 master-0 kubenswrapper[33013]: I0313 11:12:27.333044 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pd8xv"] Mar 13 11:12:27.428392 master-0 kubenswrapper[33013]: I0313 11:12:27.428317 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xzg9\" (UniqueName: \"kubernetes.io/projected/c16c74e7-f812-472d-9023-596975e4f499-kube-api-access-5xzg9\") pod \"keystone-db-create-pd8xv\" (UID: \"c16c74e7-f812-472d-9023-596975e4f499\") " pod="openstack/keystone-db-create-pd8xv" Mar 13 11:12:27.429291 master-0 kubenswrapper[33013]: I0313 11:12:27.429248 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c16c74e7-f812-472d-9023-596975e4f499-operator-scripts\") pod \"keystone-db-create-pd8xv\" (UID: \"c16c74e7-f812-472d-9023-596975e4f499\") " pod="openstack/keystone-db-create-pd8xv" Mar 13 11:12:27.503769 master-0 kubenswrapper[33013]: I0313 11:12:27.503687 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8b1a-account-create-update-gbm6q"] Mar 13 11:12:27.506249 master-0 kubenswrapper[33013]: I0313 11:12:27.506164 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8b1a-account-create-update-gbm6q" Mar 13 11:12:27.509395 master-0 kubenswrapper[33013]: I0313 11:12:27.509341 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 13 11:12:27.532405 master-0 kubenswrapper[33013]: I0313 11:12:27.532338 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xzg9\" (UniqueName: \"kubernetes.io/projected/c16c74e7-f812-472d-9023-596975e4f499-kube-api-access-5xzg9\") pod \"keystone-db-create-pd8xv\" (UID: \"c16c74e7-f812-472d-9023-596975e4f499\") " pod="openstack/keystone-db-create-pd8xv" Mar 13 11:12:27.532657 master-0 kubenswrapper[33013]: I0313 11:12:27.532619 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c16c74e7-f812-472d-9023-596975e4f499-operator-scripts\") pod \"keystone-db-create-pd8xv\" (UID: \"c16c74e7-f812-472d-9023-596975e4f499\") " pod="openstack/keystone-db-create-pd8xv" Mar 13 11:12:27.533629 master-0 kubenswrapper[33013]: I0313 11:12:27.533575 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c16c74e7-f812-472d-9023-596975e4f499-operator-scripts\") pod \"keystone-db-create-pd8xv\" (UID: \"c16c74e7-f812-472d-9023-596975e4f499\") " pod="openstack/keystone-db-create-pd8xv" Mar 13 11:12:27.547259 master-0 kubenswrapper[33013]: I0313 11:12:27.547188 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8b1a-account-create-update-gbm6q"] Mar 13 11:12:27.553107 master-0 kubenswrapper[33013]: I0313 11:12:27.553048 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xzg9\" (UniqueName: \"kubernetes.io/projected/c16c74e7-f812-472d-9023-596975e4f499-kube-api-access-5xzg9\") pod \"keystone-db-create-pd8xv\" (UID: 
\"c16c74e7-f812-472d-9023-596975e4f499\") " pod="openstack/keystone-db-create-pd8xv" Mar 13 11:12:27.640902 master-0 kubenswrapper[33013]: I0313 11:12:27.617957 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-ll5tp"] Mar 13 11:12:27.640902 master-0 kubenswrapper[33013]: I0313 11:12:27.619410 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ll5tp"] Mar 13 11:12:27.640902 master-0 kubenswrapper[33013]: I0313 11:12:27.619492 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ll5tp" Mar 13 11:12:27.652114 master-0 kubenswrapper[33013]: I0313 11:12:27.652058 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad76c9b2-369d-443a-abe4-09a4081e67de-operator-scripts\") pod \"keystone-8b1a-account-create-update-gbm6q\" (UID: \"ad76c9b2-369d-443a-abe4-09a4081e67de\") " pod="openstack/keystone-8b1a-account-create-update-gbm6q" Mar 13 11:12:27.652114 master-0 kubenswrapper[33013]: I0313 11:12:27.652124 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvf8w\" (UniqueName: \"kubernetes.io/projected/ad76c9b2-369d-443a-abe4-09a4081e67de-kube-api-access-wvf8w\") pod \"keystone-8b1a-account-create-update-gbm6q\" (UID: \"ad76c9b2-369d-443a-abe4-09a4081e67de\") " pod="openstack/keystone-8b1a-account-create-update-gbm6q" Mar 13 11:12:27.653854 master-0 kubenswrapper[33013]: I0313 11:12:27.653787 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pd8xv" Mar 13 11:12:27.720409 master-0 kubenswrapper[33013]: I0313 11:12:27.720315 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d01-account-create-update-2wzz6"] Mar 13 11:12:27.721863 master-0 kubenswrapper[33013]: I0313 11:12:27.721835 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d01-account-create-update-2wzz6" Mar 13 11:12:27.726975 master-0 kubenswrapper[33013]: I0313 11:12:27.725600 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 13 11:12:27.755821 master-0 kubenswrapper[33013]: I0313 11:12:27.739131 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d01-account-create-update-2wzz6"] Mar 13 11:12:27.755821 master-0 kubenswrapper[33013]: I0313 11:12:27.753999 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5jmr\" (UniqueName: \"kubernetes.io/projected/0ae0d894-8db9-48e7-9ac5-776c822a483c-kube-api-access-m5jmr\") pod \"placement-db-create-ll5tp\" (UID: \"0ae0d894-8db9-48e7-9ac5-776c822a483c\") " pod="openstack/placement-db-create-ll5tp" Mar 13 11:12:27.755821 master-0 kubenswrapper[33013]: I0313 11:12:27.754127 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad76c9b2-369d-443a-abe4-09a4081e67de-operator-scripts\") pod \"keystone-8b1a-account-create-update-gbm6q\" (UID: \"ad76c9b2-369d-443a-abe4-09a4081e67de\") " pod="openstack/keystone-8b1a-account-create-update-gbm6q" Mar 13 11:12:27.755821 master-0 kubenswrapper[33013]: I0313 11:12:27.754157 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvf8w\" (UniqueName: \"kubernetes.io/projected/ad76c9b2-369d-443a-abe4-09a4081e67de-kube-api-access-wvf8w\") pod 
\"keystone-8b1a-account-create-update-gbm6q\" (UID: \"ad76c9b2-369d-443a-abe4-09a4081e67de\") " pod="openstack/keystone-8b1a-account-create-update-gbm6q" Mar 13 11:12:27.755821 master-0 kubenswrapper[33013]: I0313 11:12:27.754215 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae0d894-8db9-48e7-9ac5-776c822a483c-operator-scripts\") pod \"placement-db-create-ll5tp\" (UID: \"0ae0d894-8db9-48e7-9ac5-776c822a483c\") " pod="openstack/placement-db-create-ll5tp" Mar 13 11:12:27.755821 master-0 kubenswrapper[33013]: I0313 11:12:27.754975 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad76c9b2-369d-443a-abe4-09a4081e67de-operator-scripts\") pod \"keystone-8b1a-account-create-update-gbm6q\" (UID: \"ad76c9b2-369d-443a-abe4-09a4081e67de\") " pod="openstack/keystone-8b1a-account-create-update-gbm6q" Mar 13 11:12:27.782853 master-0 kubenswrapper[33013]: I0313 11:12:27.782795 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvf8w\" (UniqueName: \"kubernetes.io/projected/ad76c9b2-369d-443a-abe4-09a4081e67de-kube-api-access-wvf8w\") pod \"keystone-8b1a-account-create-update-gbm6q\" (UID: \"ad76c9b2-369d-443a-abe4-09a4081e67de\") " pod="openstack/keystone-8b1a-account-create-update-gbm6q" Mar 13 11:12:27.836433 master-0 kubenswrapper[33013]: I0313 11:12:27.836276 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8b1a-account-create-update-gbm6q" Mar 13 11:12:27.856424 master-0 kubenswrapper[33013]: I0313 11:12:27.856339 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae0d894-8db9-48e7-9ac5-776c822a483c-operator-scripts\") pod \"placement-db-create-ll5tp\" (UID: \"0ae0d894-8db9-48e7-9ac5-776c822a483c\") " pod="openstack/placement-db-create-ll5tp" Mar 13 11:12:27.857308 master-0 kubenswrapper[33013]: I0313 11:12:27.857274 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5923a466-b63b-4110-a0d0-535eb1eb2d09-operator-scripts\") pod \"placement-7d01-account-create-update-2wzz6\" (UID: \"5923a466-b63b-4110-a0d0-535eb1eb2d09\") " pod="openstack/placement-7d01-account-create-update-2wzz6" Mar 13 11:12:27.857427 master-0 kubenswrapper[33013]: I0313 11:12:27.857202 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae0d894-8db9-48e7-9ac5-776c822a483c-operator-scripts\") pod \"placement-db-create-ll5tp\" (UID: \"0ae0d894-8db9-48e7-9ac5-776c822a483c\") " pod="openstack/placement-db-create-ll5tp" Mar 13 11:12:27.857542 master-0 kubenswrapper[33013]: I0313 11:12:27.857516 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zpc\" (UniqueName: \"kubernetes.io/projected/5923a466-b63b-4110-a0d0-535eb1eb2d09-kube-api-access-z4zpc\") pod \"placement-7d01-account-create-update-2wzz6\" (UID: \"5923a466-b63b-4110-a0d0-535eb1eb2d09\") " pod="openstack/placement-7d01-account-create-update-2wzz6" Mar 13 11:12:27.858070 master-0 kubenswrapper[33013]: I0313 11:12:27.858036 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5jmr\" (UniqueName: 
\"kubernetes.io/projected/0ae0d894-8db9-48e7-9ac5-776c822a483c-kube-api-access-m5jmr\") pod \"placement-db-create-ll5tp\" (UID: \"0ae0d894-8db9-48e7-9ac5-776c822a483c\") " pod="openstack/placement-db-create-ll5tp" Mar 13 11:12:27.878644 master-0 kubenswrapper[33013]: I0313 11:12:27.878558 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5jmr\" (UniqueName: \"kubernetes.io/projected/0ae0d894-8db9-48e7-9ac5-776c822a483c-kube-api-access-m5jmr\") pod \"placement-db-create-ll5tp\" (UID: \"0ae0d894-8db9-48e7-9ac5-776c822a483c\") " pod="openstack/placement-db-create-ll5tp" Mar 13 11:12:27.960562 master-0 kubenswrapper[33013]: I0313 11:12:27.960502 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5923a466-b63b-4110-a0d0-535eb1eb2d09-operator-scripts\") pod \"placement-7d01-account-create-update-2wzz6\" (UID: \"5923a466-b63b-4110-a0d0-535eb1eb2d09\") " pod="openstack/placement-7d01-account-create-update-2wzz6" Mar 13 11:12:27.960562 master-0 kubenswrapper[33013]: I0313 11:12:27.960561 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4zpc\" (UniqueName: \"kubernetes.io/projected/5923a466-b63b-4110-a0d0-535eb1eb2d09-kube-api-access-z4zpc\") pod \"placement-7d01-account-create-update-2wzz6\" (UID: \"5923a466-b63b-4110-a0d0-535eb1eb2d09\") " pod="openstack/placement-7d01-account-create-update-2wzz6" Mar 13 11:12:27.961678 master-0 kubenswrapper[33013]: I0313 11:12:27.961581 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5923a466-b63b-4110-a0d0-535eb1eb2d09-operator-scripts\") pod \"placement-7d01-account-create-update-2wzz6\" (UID: \"5923a466-b63b-4110-a0d0-535eb1eb2d09\") " pod="openstack/placement-7d01-account-create-update-2wzz6" Mar 13 11:12:27.990859 master-0 kubenswrapper[33013]: I0313 11:12:27.990794 
33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4zpc\" (UniqueName: \"kubernetes.io/projected/5923a466-b63b-4110-a0d0-535eb1eb2d09-kube-api-access-z4zpc\") pod \"placement-7d01-account-create-update-2wzz6\" (UID: \"5923a466-b63b-4110-a0d0-535eb1eb2d09\") " pod="openstack/placement-7d01-account-create-update-2wzz6" Mar 13 11:12:28.015542 master-0 kubenswrapper[33013]: I0313 11:12:28.015486 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ll5tp" Mar 13 11:12:28.046162 master-0 kubenswrapper[33013]: I0313 11:12:28.046083 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d01-account-create-update-2wzz6" Mar 13 11:12:28.983631 master-0 kubenswrapper[33013]: I0313 11:12:28.981719 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:29.060369 master-0 kubenswrapper[33013]: I0313 11:12:29.060218 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:29.086897 master-0 kubenswrapper[33013]: I0313 11:12:29.080989 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/750dd69c-fa3d-4799-8c7e-42f0328254f6-operator-scripts\") pod \"750dd69c-fa3d-4799-8c7e-42f0328254f6\" (UID: \"750dd69c-fa3d-4799-8c7e-42f0328254f6\") " Mar 13 11:12:29.086897 master-0 kubenswrapper[33013]: I0313 11:12:29.081175 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmpgn\" (UniqueName: \"kubernetes.io/projected/750dd69c-fa3d-4799-8c7e-42f0328254f6-kube-api-access-mmpgn\") pod \"750dd69c-fa3d-4799-8c7e-42f0328254f6\" (UID: \"750dd69c-fa3d-4799-8c7e-42f0328254f6\") " Mar 13 11:12:29.095411 master-0 kubenswrapper[33013]: I0313 11:12:29.095316 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750dd69c-fa3d-4799-8c7e-42f0328254f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "750dd69c-fa3d-4799-8c7e-42f0328254f6" (UID: "750dd69c-fa3d-4799-8c7e-42f0328254f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:29.099384 master-0 kubenswrapper[33013]: I0313 11:12:29.098367 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:29.116523 master-0 kubenswrapper[33013]: I0313 11:12:29.114504 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750dd69c-fa3d-4799-8c7e-42f0328254f6-kube-api-access-mmpgn" (OuterVolumeSpecName: "kube-api-access-mmpgn") pod "750dd69c-fa3d-4799-8c7e-42f0328254f6" (UID: "750dd69c-fa3d-4799-8c7e-42f0328254f6"). InnerVolumeSpecName "kube-api-access-mmpgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:29.183051 master-0 kubenswrapper[33013]: I0313 11:12:29.182988 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdvd2\" (UniqueName: \"kubernetes.io/projected/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-kube-api-access-bdvd2\") pod \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\" (UID: \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\") " Mar 13 11:12:29.183176 master-0 kubenswrapper[33013]: I0313 11:12:29.183139 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4bw5\" (UniqueName: \"kubernetes.io/projected/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-kube-api-access-m4bw5\") pod \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\" (UID: \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\") " Mar 13 11:12:29.183257 master-0 kubenswrapper[33013]: I0313 11:12:29.183238 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-operator-scripts\") pod \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\" (UID: \"8db9c8d1-1f9e-46a2-b1b6-9398919b760b\") " Mar 13 11:12:29.183330 master-0 kubenswrapper[33013]: I0313 11:12:29.183311 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-operator-scripts\") pod \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\" (UID: \"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca\") " Mar 13 11:12:29.183912 master-0 kubenswrapper[33013]: I0313 11:12:29.183786 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0" Mar 13 11:12:29.184015 master-0 kubenswrapper[33013]: I0313 
11:12:29.183964 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmpgn\" (UniqueName: \"kubernetes.io/projected/750dd69c-fa3d-4799-8c7e-42f0328254f6-kube-api-access-mmpgn\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:29.184015 master-0 kubenswrapper[33013]: I0313 11:12:29.183981 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/750dd69c-fa3d-4799-8c7e-42f0328254f6-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:29.184164 master-0 kubenswrapper[33013]: E0313 11:12:29.184083 33013 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 13 11:12:29.184164 master-0 kubenswrapper[33013]: E0313 11:12:29.184096 33013 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 13 11:12:29.184164 master-0 kubenswrapper[33013]: E0313 11:12:29.184139 33013 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift podName:303793a4-990d-4b5f-bb44-ff67b1985406 nodeName:}" failed. No retries permitted until 2026-03-13 11:12:37.184122904 +0000 UTC m=+940.660076253 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift") pod "swift-storage-0" (UID: "303793a4-990d-4b5f-bb44-ff67b1985406") : configmap "swift-ring-files" not found Mar 13 11:12:29.185150 master-0 kubenswrapper[33013]: I0313 11:12:29.185099 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1d2c12d-e0f4-4826-a942-52d09d6ff4ca" (UID: "b1d2c12d-e0f4-4826-a942-52d09d6ff4ca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:29.185544 master-0 kubenswrapper[33013]: I0313 11:12:29.185527 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8db9c8d1-1f9e-46a2-b1b6-9398919b760b" (UID: "8db9c8d1-1f9e-46a2-b1b6-9398919b760b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:29.188844 master-0 kubenswrapper[33013]: I0313 11:12:29.188776 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-kube-api-access-bdvd2" (OuterVolumeSpecName: "kube-api-access-bdvd2") pod "b1d2c12d-e0f4-4826-a942-52d09d6ff4ca" (UID: "b1d2c12d-e0f4-4826-a942-52d09d6ff4ca"). InnerVolumeSpecName "kube-api-access-bdvd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:29.192097 master-0 kubenswrapper[33013]: I0313 11:12:29.192063 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-kube-api-access-m4bw5" (OuterVolumeSpecName: "kube-api-access-m4bw5") pod "8db9c8d1-1f9e-46a2-b1b6-9398919b760b" (UID: "8db9c8d1-1f9e-46a2-b1b6-9398919b760b"). InnerVolumeSpecName "kube-api-access-m4bw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:29.286196 master-0 kubenswrapper[33013]: I0313 11:12:29.285885 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:29.286196 master-0 kubenswrapper[33013]: I0313 11:12:29.285942 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdvd2\" (UniqueName: \"kubernetes.io/projected/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca-kube-api-access-bdvd2\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:29.286196 master-0 kubenswrapper[33013]: I0313 11:12:29.285958 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4bw5\" (UniqueName: \"kubernetes.io/projected/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-kube-api-access-m4bw5\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:29.286196 master-0 kubenswrapper[33013]: I0313 11:12:29.285970 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db9c8d1-1f9e-46a2-b1b6-9398919b760b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:29.303922 master-0 kubenswrapper[33013]: I0313 11:12:29.303867 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b7cs5" event={"ID":"b1d2c12d-e0f4-4826-a942-52d09d6ff4ca","Type":"ContainerDied","Data":"f492c326c83d731cef7b38c881c943ae4ac8f9254e9a2486dcd562d1bbd30c57"} Mar 13 11:12:29.303922 master-0 kubenswrapper[33013]: I0313 11:12:29.303916 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f492c326c83d731cef7b38c881c943ae4ac8f9254e9a2486dcd562d1bbd30c57" Mar 13 11:12:29.304241 master-0 kubenswrapper[33013]: I0313 11:12:29.303977 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-b7cs5" Mar 13 11:12:29.312164 master-0 kubenswrapper[33013]: I0313 11:12:29.312102 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v5mmf" event={"ID":"750dd69c-fa3d-4799-8c7e-42f0328254f6","Type":"ContainerDied","Data":"19ebea6f0f4a8239384a19e4341d5c79722b5615f77563d2aac5acbcc8032b38"} Mar 13 11:12:29.312164 master-0 kubenswrapper[33013]: I0313 11:12:29.312151 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19ebea6f0f4a8239384a19e4341d5c79722b5615f77563d2aac5acbcc8032b38" Mar 13 11:12:29.312459 master-0 kubenswrapper[33013]: I0313 11:12:29.312214 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v5mmf" Mar 13 11:12:29.314544 master-0 kubenswrapper[33013]: I0313 11:12:29.314505 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2dc3-account-create-update-8vftc" event={"ID":"8db9c8d1-1f9e-46a2-b1b6-9398919b760b","Type":"ContainerDied","Data":"fb065d1ff62e8aea28107f467d510c185b043f2ec388563de3dcd3406551c00a"} Mar 13 11:12:29.314544 master-0 kubenswrapper[33013]: I0313 11:12:29.314541 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb065d1ff62e8aea28107f467d510c185b043f2ec388563de3dcd3406551c00a" Mar 13 11:12:29.314662 master-0 kubenswrapper[33013]: I0313 11:12:29.314609 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2dc3-account-create-update-8vftc" Mar 13 11:12:29.317844 master-0 kubenswrapper[33013]: I0313 11:12:29.317793 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-flpq9" event={"ID":"bf20429e-cff0-4482-b2f6-3aab17d64e57","Type":"ContainerStarted","Data":"286a3d3fb1fc76c87939b88b353df5f13b0f7db9e4e868978ac90de61ff8349c"} Mar 13 11:12:29.342902 master-0 kubenswrapper[33013]: I0313 11:12:29.342779 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-flpq9" podStartSLOduration=1.477404149 podStartE2EDuration="5.342738578s" podCreationTimestamp="2026-03-13 11:12:24 +0000 UTC" firstStartedPulling="2026-03-13 11:12:25.02863908 +0000 UTC m=+928.504592429" lastFinishedPulling="2026-03-13 11:12:28.893973509 +0000 UTC m=+932.369926858" observedRunningTime="2026-03-13 11:12:29.342040288 +0000 UTC m=+932.817993627" watchObservedRunningTime="2026-03-13 11:12:29.342738578 +0000 UTC m=+932.818691947" Mar 13 11:12:29.617537 master-0 kubenswrapper[33013]: I0313 11:12:29.617470 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pd8xv"] Mar 13 11:12:29.619874 master-0 kubenswrapper[33013]: W0313 11:12:29.619561 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5923a466_b63b_4110_a0d0_535eb1eb2d09.slice/crio-5941cd86ac91f3ce4fecc32be0c16902ad91617cd8e899b8d976d53f356f6b15 WatchSource:0}: Error finding container 5941cd86ac91f3ce4fecc32be0c16902ad91617cd8e899b8d976d53f356f6b15: Status 404 returned error can't find the container with id 5941cd86ac91f3ce4fecc32be0c16902ad91617cd8e899b8d976d53f356f6b15 Mar 13 11:12:29.627121 master-0 kubenswrapper[33013]: I0313 11:12:29.626151 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d01-account-create-update-2wzz6"] Mar 13 11:12:29.627517 master-0 
kubenswrapper[33013]: W0313 11:12:29.627481 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad76c9b2_369d_443a_abe4_09a4081e67de.slice/crio-87d528e88e3ceb85a54a7f3b3185fe8ae54840fd67f19c5396c11fffbb63909a WatchSource:0}: Error finding container 87d528e88e3ceb85a54a7f3b3185fe8ae54840fd67f19c5396c11fffbb63909a: Status 404 returned error can't find the container with id 87d528e88e3ceb85a54a7f3b3185fe8ae54840fd67f19c5396c11fffbb63909a Mar 13 11:12:29.638739 master-0 kubenswrapper[33013]: I0313 11:12:29.638667 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8b1a-account-create-update-gbm6q"] Mar 13 11:12:29.659633 master-0 kubenswrapper[33013]: I0313 11:12:29.659551 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:12:29.747862 master-0 kubenswrapper[33013]: I0313 11:12:29.747358 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ll5tp"] Mar 13 11:12:29.787555 master-0 kubenswrapper[33013]: I0313 11:12:29.787508 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"] Mar 13 11:12:29.787863 master-0 kubenswrapper[33013]: I0313 11:12:29.787825 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" podUID="6b30d941-db8d-4248-bf0b-535afba17d11" containerName="dnsmasq-dns" containerID="cri-o://d79048ee7852dead8586ec2a42ed2a7c8853ce561a5c70975dd4503ee2b377bc" gracePeriod=10 Mar 13 11:12:30.334617 master-0 kubenswrapper[33013]: I0313 11:12:30.332737 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b1a-account-create-update-gbm6q" event={"ID":"ad76c9b2-369d-443a-abe4-09a4081e67de","Type":"ContainerStarted","Data":"bf3f7f58e1286b34b0194ced784fa72b49a7d468fcf51265b018f83d3711cdd8"} Mar 13 
11:12:30.334617 master-0 kubenswrapper[33013]: I0313 11:12:30.332792 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b1a-account-create-update-gbm6q" event={"ID":"ad76c9b2-369d-443a-abe4-09a4081e67de","Type":"ContainerStarted","Data":"87d528e88e3ceb85a54a7f3b3185fe8ae54840fd67f19c5396c11fffbb63909a"} Mar 13 11:12:30.375610 master-0 kubenswrapper[33013]: I0313 11:12:30.374072 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d01-account-create-update-2wzz6" event={"ID":"5923a466-b63b-4110-a0d0-535eb1eb2d09","Type":"ContainerStarted","Data":"4caaf804b22452dab97d8abd6a9c76e0e654fe32002f200f9a518dd52b2b3454"} Mar 13 11:12:30.375610 master-0 kubenswrapper[33013]: I0313 11:12:30.374143 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d01-account-create-update-2wzz6" event={"ID":"5923a466-b63b-4110-a0d0-535eb1eb2d09","Type":"ContainerStarted","Data":"5941cd86ac91f3ce4fecc32be0c16902ad91617cd8e899b8d976d53f356f6b15"} Mar 13 11:12:30.385614 master-0 kubenswrapper[33013]: I0313 11:12:30.378682 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8b1a-account-create-update-gbm6q" podStartSLOduration=3.378659481 podStartE2EDuration="3.378659481s" podCreationTimestamp="2026-03-13 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:12:30.375207194 +0000 UTC m=+933.851160543" watchObservedRunningTime="2026-03-13 11:12:30.378659481 +0000 UTC m=+933.854612830" Mar 13 11:12:30.395764 master-0 kubenswrapper[33013]: I0313 11:12:30.392642 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pd8xv" event={"ID":"c16c74e7-f812-472d-9023-596975e4f499","Type":"ContainerStarted","Data":"5d4551c61a673e60df50d2ec926f46c649584bc356644ef6f4ae35c9e93d839f"} Mar 13 11:12:30.395764 master-0 kubenswrapper[33013]: 
I0313 11:12:30.392710 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pd8xv" event={"ID":"c16c74e7-f812-472d-9023-596975e4f499","Type":"ContainerStarted","Data":"5494322a00a5b754f3a0a86b68e26fac41cd01ebf205c01f0abc356ccf97196d"} Mar 13 11:12:30.400698 master-0 kubenswrapper[33013]: I0313 11:12:30.396114 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ll5tp" event={"ID":"0ae0d894-8db9-48e7-9ac5-776c822a483c","Type":"ContainerStarted","Data":"5540843b36d3356e6ca9fea2549eef0ae2ef07e4a4e43269a3bb15ce9e503819"} Mar 13 11:12:30.400698 master-0 kubenswrapper[33013]: I0313 11:12:30.396199 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ll5tp" event={"ID":"0ae0d894-8db9-48e7-9ac5-776c822a483c","Type":"ContainerStarted","Data":"42051070e6c22361c9d6f77997d9aee4e6a96b1a379bef037042fb688211442a"} Mar 13 11:12:30.401561 master-0 kubenswrapper[33013]: I0313 11:12:30.401364 33013 generic.go:334] "Generic (PLEG): container finished" podID="6b30d941-db8d-4248-bf0b-535afba17d11" containerID="d79048ee7852dead8586ec2a42ed2a7c8853ce561a5c70975dd4503ee2b377bc" exitCode=0 Mar 13 11:12:30.402663 master-0 kubenswrapper[33013]: I0313 11:12:30.402607 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" event={"ID":"6b30d941-db8d-4248-bf0b-535afba17d11","Type":"ContainerDied","Data":"d79048ee7852dead8586ec2a42ed2a7c8853ce561a5c70975dd4503ee2b377bc"} Mar 13 11:12:30.446673 master-0 kubenswrapper[33013]: I0313 11:12:30.446564 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7d01-account-create-update-2wzz6" podStartSLOduration=3.446539796 podStartE2EDuration="3.446539796s" podCreationTimestamp="2026-03-13 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 
11:12:30.404554242 +0000 UTC m=+933.880507591" watchObservedRunningTime="2026-03-13 11:12:30.446539796 +0000 UTC m=+933.922493145" Mar 13 11:12:30.449286 master-0 kubenswrapper[33013]: I0313 11:12:30.449218 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-pd8xv" podStartSLOduration=3.449205801 podStartE2EDuration="3.449205801s" podCreationTimestamp="2026-03-13 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:12:30.445149877 +0000 UTC m=+933.921103226" watchObservedRunningTime="2026-03-13 11:12:30.449205801 +0000 UTC m=+933.925159150" Mar 13 11:12:30.483972 master-0 kubenswrapper[33013]: I0313 11:12:30.483860 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-ll5tp" podStartSLOduration=3.483837728 podStartE2EDuration="3.483837728s" podCreationTimestamp="2026-03-13 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:12:30.466770867 +0000 UTC m=+933.942724216" watchObservedRunningTime="2026-03-13 11:12:30.483837728 +0000 UTC m=+933.959791077" Mar 13 11:12:30.766380 master-0 kubenswrapper[33013]: I0313 11:12:30.766296 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" Mar 13 11:12:30.863942 master-0 kubenswrapper[33013]: I0313 11:12:30.863873 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-dns-svc\") pod \"6b30d941-db8d-4248-bf0b-535afba17d11\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " Mar 13 11:12:30.864255 master-0 kubenswrapper[33013]: I0313 11:12:30.863988 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsvct\" (UniqueName: \"kubernetes.io/projected/6b30d941-db8d-4248-bf0b-535afba17d11-kube-api-access-zsvct\") pod \"6b30d941-db8d-4248-bf0b-535afba17d11\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " Mar 13 11:12:30.864255 master-0 kubenswrapper[33013]: I0313 11:12:30.864140 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-config\") pod \"6b30d941-db8d-4248-bf0b-535afba17d11\" (UID: \"6b30d941-db8d-4248-bf0b-535afba17d11\") " Mar 13 11:12:30.871016 master-0 kubenswrapper[33013]: I0313 11:12:30.870935 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b30d941-db8d-4248-bf0b-535afba17d11-kube-api-access-zsvct" (OuterVolumeSpecName: "kube-api-access-zsvct") pod "6b30d941-db8d-4248-bf0b-535afba17d11" (UID: "6b30d941-db8d-4248-bf0b-535afba17d11"). InnerVolumeSpecName "kube-api-access-zsvct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:30.906891 master-0 kubenswrapper[33013]: I0313 11:12:30.906695 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-config" (OuterVolumeSpecName: "config") pod "6b30d941-db8d-4248-bf0b-535afba17d11" (UID: "6b30d941-db8d-4248-bf0b-535afba17d11"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:30.922622 master-0 kubenswrapper[33013]: I0313 11:12:30.922441 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b30d941-db8d-4248-bf0b-535afba17d11" (UID: "6b30d941-db8d-4248-bf0b-535afba17d11"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:30.973061 master-0 kubenswrapper[33013]: I0313 11:12:30.967437 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:30.973061 master-0 kubenswrapper[33013]: I0313 11:12:30.967481 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsvct\" (UniqueName: \"kubernetes.io/projected/6b30d941-db8d-4248-bf0b-535afba17d11-kube-api-access-zsvct\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:30.973061 master-0 kubenswrapper[33013]: I0313 11:12:30.967495 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b30d941-db8d-4248-bf0b-535afba17d11-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:31.414317 master-0 kubenswrapper[33013]: I0313 11:12:31.414231 33013 generic.go:334] "Generic (PLEG): container finished" podID="c16c74e7-f812-472d-9023-596975e4f499" containerID="5d4551c61a673e60df50d2ec926f46c649584bc356644ef6f4ae35c9e93d839f" exitCode=0 Mar 13 11:12:31.415040 master-0 kubenswrapper[33013]: I0313 11:12:31.414363 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pd8xv" event={"ID":"c16c74e7-f812-472d-9023-596975e4f499","Type":"ContainerDied","Data":"5d4551c61a673e60df50d2ec926f46c649584bc356644ef6f4ae35c9e93d839f"} Mar 13 11:12:31.417511 master-0 
kubenswrapper[33013]: I0313 11:12:31.417478 33013 generic.go:334] "Generic (PLEG): container finished" podID="0ae0d894-8db9-48e7-9ac5-776c822a483c" containerID="5540843b36d3356e6ca9fea2549eef0ae2ef07e4a4e43269a3bb15ce9e503819" exitCode=0
Mar 13 11:12:31.417687 master-0 kubenswrapper[33013]: I0313 11:12:31.417538 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ll5tp" event={"ID":"0ae0d894-8db9-48e7-9ac5-776c822a483c","Type":"ContainerDied","Data":"5540843b36d3356e6ca9fea2549eef0ae2ef07e4a4e43269a3bb15ce9e503819"}
Mar 13 11:12:31.420601 master-0 kubenswrapper[33013]: I0313 11:12:31.420549 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"
Mar 13 11:12:31.420710 master-0 kubenswrapper[33013]: I0313 11:12:31.420618 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8" event={"ID":"6b30d941-db8d-4248-bf0b-535afba17d11","Type":"ContainerDied","Data":"abe218f445a8ecd9585cfc50797b501a13bf7532332b82dda30e4d5ebfde6c69"}
Mar 13 11:12:31.420710 master-0 kubenswrapper[33013]: I0313 11:12:31.420648 33013 scope.go:117] "RemoveContainer" containerID="d79048ee7852dead8586ec2a42ed2a7c8853ce561a5c70975dd4503ee2b377bc"
Mar 13 11:12:31.425608 master-0 kubenswrapper[33013]: I0313 11:12:31.423694 33013 generic.go:334] "Generic (PLEG): container finished" podID="ad76c9b2-369d-443a-abe4-09a4081e67de" containerID="bf3f7f58e1286b34b0194ced784fa72b49a7d468fcf51265b018f83d3711cdd8" exitCode=0
Mar 13 11:12:31.425608 master-0 kubenswrapper[33013]: I0313 11:12:31.423785 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b1a-account-create-update-gbm6q" event={"ID":"ad76c9b2-369d-443a-abe4-09a4081e67de","Type":"ContainerDied","Data":"bf3f7f58e1286b34b0194ced784fa72b49a7d468fcf51265b018f83d3711cdd8"}
Mar 13 11:12:31.428165 master-0 kubenswrapper[33013]: I0313 11:12:31.425835 33013 generic.go:334] "Generic (PLEG): container finished" podID="5923a466-b63b-4110-a0d0-535eb1eb2d09" containerID="4caaf804b22452dab97d8abd6a9c76e0e654fe32002f200f9a518dd52b2b3454" exitCode=0
Mar 13 11:12:31.428165 master-0 kubenswrapper[33013]: I0313 11:12:31.425875 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d01-account-create-update-2wzz6" event={"ID":"5923a466-b63b-4110-a0d0-535eb1eb2d09","Type":"ContainerDied","Data":"4caaf804b22452dab97d8abd6a9c76e0e654fe32002f200f9a518dd52b2b3454"}
Mar 13 11:12:31.446705 master-0 kubenswrapper[33013]: I0313 11:12:31.446058 33013 scope.go:117] "RemoveContainer" containerID="56eb9ea835de30be92bd119d41700de864b198e7a7bada85c9e96e0e36985ee7"
Mar 13 11:12:31.542835 master-0 kubenswrapper[33013]: I0313 11:12:31.542779 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"]
Mar 13 11:12:31.552579 master-0 kubenswrapper[33013]: I0313 11:12:31.552510 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-c9cs8"]
Mar 13 11:12:32.749338 master-0 kubenswrapper[33013]: I0313 11:12:32.749186 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b30d941-db8d-4248-bf0b-535afba17d11" path="/var/lib/kubelet/pods/6b30d941-db8d-4248-bf0b-535afba17d11/volumes"
Mar 13 11:12:33.077800 master-0 kubenswrapper[33013]: I0313 11:12:33.077740 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8b1a-account-create-update-gbm6q"
Mar 13 11:12:33.229898 master-0 kubenswrapper[33013]: I0313 11:12:33.229827 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad76c9b2-369d-443a-abe4-09a4081e67de-operator-scripts\") pod \"ad76c9b2-369d-443a-abe4-09a4081e67de\" (UID: \"ad76c9b2-369d-443a-abe4-09a4081e67de\") "
Mar 13 11:12:33.229898 master-0 kubenswrapper[33013]: I0313 11:12:33.229880 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvf8w\" (UniqueName: \"kubernetes.io/projected/ad76c9b2-369d-443a-abe4-09a4081e67de-kube-api-access-wvf8w\") pod \"ad76c9b2-369d-443a-abe4-09a4081e67de\" (UID: \"ad76c9b2-369d-443a-abe4-09a4081e67de\") "
Mar 13 11:12:33.233684 master-0 kubenswrapper[33013]: I0313 11:12:33.231829 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad76c9b2-369d-443a-abe4-09a4081e67de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad76c9b2-369d-443a-abe4-09a4081e67de" (UID: "ad76c9b2-369d-443a-abe4-09a4081e67de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:12:33.244965 master-0 kubenswrapper[33013]: I0313 11:12:33.244891 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad76c9b2-369d-443a-abe4-09a4081e67de-kube-api-access-wvf8w" (OuterVolumeSpecName: "kube-api-access-wvf8w") pod "ad76c9b2-369d-443a-abe4-09a4081e67de" (UID: "ad76c9b2-369d-443a-abe4-09a4081e67de"). InnerVolumeSpecName "kube-api-access-wvf8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:12:33.342953 master-0 kubenswrapper[33013]: I0313 11:12:33.332142 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad76c9b2-369d-443a-abe4-09a4081e67de-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:33.342953 master-0 kubenswrapper[33013]: I0313 11:12:33.332173 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvf8w\" (UniqueName: \"kubernetes.io/projected/ad76c9b2-369d-443a-abe4-09a4081e67de-kube-api-access-wvf8w\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:33.354784 master-0 kubenswrapper[33013]: I0313 11:12:33.354749 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d01-account-create-update-2wzz6"
Mar 13 11:12:33.369232 master-0 kubenswrapper[33013]: I0313 11:12:33.368387 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ll5tp"
Mar 13 11:12:33.396677 master-0 kubenswrapper[33013]: I0313 11:12:33.390542 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pd8xv"
Mar 13 11:12:33.434889 master-0 kubenswrapper[33013]: I0313 11:12:33.433489 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5923a466-b63b-4110-a0d0-535eb1eb2d09-operator-scripts\") pod \"5923a466-b63b-4110-a0d0-535eb1eb2d09\" (UID: \"5923a466-b63b-4110-a0d0-535eb1eb2d09\") "
Mar 13 11:12:33.434889 master-0 kubenswrapper[33013]: I0313 11:12:33.433825 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4zpc\" (UniqueName: \"kubernetes.io/projected/5923a466-b63b-4110-a0d0-535eb1eb2d09-kube-api-access-z4zpc\") pod \"5923a466-b63b-4110-a0d0-535eb1eb2d09\" (UID: \"5923a466-b63b-4110-a0d0-535eb1eb2d09\") "
Mar 13 11:12:33.434889 master-0 kubenswrapper[33013]: I0313 11:12:33.434082 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5923a466-b63b-4110-a0d0-535eb1eb2d09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5923a466-b63b-4110-a0d0-535eb1eb2d09" (UID: "5923a466-b63b-4110-a0d0-535eb1eb2d09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:12:33.434889 master-0 kubenswrapper[33013]: I0313 11:12:33.434551 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5923a466-b63b-4110-a0d0-535eb1eb2d09-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:33.438071 master-0 kubenswrapper[33013]: I0313 11:12:33.437459 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5923a466-b63b-4110-a0d0-535eb1eb2d09-kube-api-access-z4zpc" (OuterVolumeSpecName: "kube-api-access-z4zpc") pod "5923a466-b63b-4110-a0d0-535eb1eb2d09" (UID: "5923a466-b63b-4110-a0d0-535eb1eb2d09"). InnerVolumeSpecName "kube-api-access-z4zpc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:12:33.465910 master-0 kubenswrapper[33013]: I0313 11:12:33.465864 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d01-account-create-update-2wzz6" event={"ID":"5923a466-b63b-4110-a0d0-535eb1eb2d09","Type":"ContainerDied","Data":"5941cd86ac91f3ce4fecc32be0c16902ad91617cd8e899b8d976d53f356f6b15"}
Mar 13 11:12:33.466193 master-0 kubenswrapper[33013]: I0313 11:12:33.466180 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5941cd86ac91f3ce4fecc32be0c16902ad91617cd8e899b8d976d53f356f6b15"
Mar 13 11:12:33.466318 master-0 kubenswrapper[33013]: I0313 11:12:33.466306 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d01-account-create-update-2wzz6"
Mar 13 11:12:33.468554 master-0 kubenswrapper[33013]: I0313 11:12:33.468537 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pd8xv" event={"ID":"c16c74e7-f812-472d-9023-596975e4f499","Type":"ContainerDied","Data":"5494322a00a5b754f3a0a86b68e26fac41cd01ebf205c01f0abc356ccf97196d"}
Mar 13 11:12:33.468667 master-0 kubenswrapper[33013]: I0313 11:12:33.468654 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5494322a00a5b754f3a0a86b68e26fac41cd01ebf205c01f0abc356ccf97196d"
Mar 13 11:12:33.468774 master-0 kubenswrapper[33013]: I0313 11:12:33.468762 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pd8xv"
Mar 13 11:12:33.471646 master-0 kubenswrapper[33013]: I0313 11:12:33.471630 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ll5tp" event={"ID":"0ae0d894-8db9-48e7-9ac5-776c822a483c","Type":"ContainerDied","Data":"42051070e6c22361c9d6f77997d9aee4e6a96b1a379bef037042fb688211442a"}
Mar 13 11:12:33.471737 master-0 kubenswrapper[33013]: I0313 11:12:33.471725 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42051070e6c22361c9d6f77997d9aee4e6a96b1a379bef037042fb688211442a"
Mar 13 11:12:33.471826 master-0 kubenswrapper[33013]: I0313 11:12:33.471814 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ll5tp"
Mar 13 11:12:33.473987 master-0 kubenswrapper[33013]: I0313 11:12:33.473971 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b1a-account-create-update-gbm6q" event={"ID":"ad76c9b2-369d-443a-abe4-09a4081e67de","Type":"ContainerDied","Data":"87d528e88e3ceb85a54a7f3b3185fe8ae54840fd67f19c5396c11fffbb63909a"}
Mar 13 11:12:33.474077 master-0 kubenswrapper[33013]: I0313 11:12:33.474065 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87d528e88e3ceb85a54a7f3b3185fe8ae54840fd67f19c5396c11fffbb63909a"
Mar 13 11:12:33.474386 master-0 kubenswrapper[33013]: I0313 11:12:33.474372 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8b1a-account-create-update-gbm6q"
Mar 13 11:12:33.536103 master-0 kubenswrapper[33013]: I0313 11:12:33.535962 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c16c74e7-f812-472d-9023-596975e4f499-operator-scripts\") pod \"c16c74e7-f812-472d-9023-596975e4f499\" (UID: \"c16c74e7-f812-472d-9023-596975e4f499\") "
Mar 13 11:12:33.537546 master-0 kubenswrapper[33013]: I0313 11:12:33.536471 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xzg9\" (UniqueName: \"kubernetes.io/projected/c16c74e7-f812-472d-9023-596975e4f499-kube-api-access-5xzg9\") pod \"c16c74e7-f812-472d-9023-596975e4f499\" (UID: \"c16c74e7-f812-472d-9023-596975e4f499\") "
Mar 13 11:12:33.537546 master-0 kubenswrapper[33013]: I0313 11:12:33.536570 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae0d894-8db9-48e7-9ac5-776c822a483c-operator-scripts\") pod \"0ae0d894-8db9-48e7-9ac5-776c822a483c\" (UID: \"0ae0d894-8db9-48e7-9ac5-776c822a483c\") "
Mar 13 11:12:33.537546 master-0 kubenswrapper[33013]: I0313 11:12:33.536713 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5jmr\" (UniqueName: \"kubernetes.io/projected/0ae0d894-8db9-48e7-9ac5-776c822a483c-kube-api-access-m5jmr\") pod \"0ae0d894-8db9-48e7-9ac5-776c822a483c\" (UID: \"0ae0d894-8db9-48e7-9ac5-776c822a483c\") "
Mar 13 11:12:33.537546 master-0 kubenswrapper[33013]: I0313 11:12:33.536724 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c16c74e7-f812-472d-9023-596975e4f499-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c16c74e7-f812-472d-9023-596975e4f499" (UID: "c16c74e7-f812-472d-9023-596975e4f499"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:12:33.537546 master-0 kubenswrapper[33013]: I0313 11:12:33.537133 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c16c74e7-f812-472d-9023-596975e4f499-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:33.537546 master-0 kubenswrapper[33013]: I0313 11:12:33.537150 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4zpc\" (UniqueName: \"kubernetes.io/projected/5923a466-b63b-4110-a0d0-535eb1eb2d09-kube-api-access-z4zpc\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:33.538297 master-0 kubenswrapper[33013]: I0313 11:12:33.538237 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae0d894-8db9-48e7-9ac5-776c822a483c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ae0d894-8db9-48e7-9ac5-776c822a483c" (UID: "0ae0d894-8db9-48e7-9ac5-776c822a483c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:12:33.539799 master-0 kubenswrapper[33013]: I0313 11:12:33.539771 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16c74e7-f812-472d-9023-596975e4f499-kube-api-access-5xzg9" (OuterVolumeSpecName: "kube-api-access-5xzg9") pod "c16c74e7-f812-472d-9023-596975e4f499" (UID: "c16c74e7-f812-472d-9023-596975e4f499"). InnerVolumeSpecName "kube-api-access-5xzg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:12:33.540543 master-0 kubenswrapper[33013]: I0313 11:12:33.540524 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae0d894-8db9-48e7-9ac5-776c822a483c-kube-api-access-m5jmr" (OuterVolumeSpecName: "kube-api-access-m5jmr") pod "0ae0d894-8db9-48e7-9ac5-776c822a483c" (UID: "0ae0d894-8db9-48e7-9ac5-776c822a483c"). InnerVolumeSpecName "kube-api-access-m5jmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:12:33.639007 master-0 kubenswrapper[33013]: I0313 11:12:33.638958 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xzg9\" (UniqueName: \"kubernetes.io/projected/c16c74e7-f812-472d-9023-596975e4f499-kube-api-access-5xzg9\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:33.639291 master-0 kubenswrapper[33013]: I0313 11:12:33.639275 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae0d894-8db9-48e7-9ac5-776c822a483c-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:33.639384 master-0 kubenswrapper[33013]: I0313 11:12:33.639370 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5jmr\" (UniqueName: \"kubernetes.io/projected/0ae0d894-8db9-48e7-9ac5-776c822a483c-kube-api-access-m5jmr\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.131966 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-69lvv"]
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: E0313 11:12:34.132530 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db9c8d1-1f9e-46a2-b1b6-9398919b760b" containerName="mariadb-account-create-update"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.132544 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db9c8d1-1f9e-46a2-b1b6-9398919b760b" containerName="mariadb-account-create-update"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: E0313 11:12:34.132565 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad76c9b2-369d-443a-abe4-09a4081e67de" containerName="mariadb-account-create-update"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.132572 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad76c9b2-369d-443a-abe4-09a4081e67de" containerName="mariadb-account-create-update"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: E0313 11:12:34.132609 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30d941-db8d-4248-bf0b-535afba17d11" containerName="dnsmasq-dns"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.132616 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30d941-db8d-4248-bf0b-535afba17d11" containerName="dnsmasq-dns"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: E0313 11:12:34.132642 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d2c12d-e0f4-4826-a942-52d09d6ff4ca" containerName="mariadb-database-create"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.132648 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d2c12d-e0f4-4826-a942-52d09d6ff4ca" containerName="mariadb-database-create"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: E0313 11:12:34.132656 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30d941-db8d-4248-bf0b-535afba17d11" containerName="init"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.132661 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30d941-db8d-4248-bf0b-535afba17d11" containerName="init"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: E0313 11:12:34.132676 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5923a466-b63b-4110-a0d0-535eb1eb2d09" containerName="mariadb-account-create-update"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.132682 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="5923a466-b63b-4110-a0d0-535eb1eb2d09" containerName="mariadb-account-create-update"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: E0313 11:12:34.132691 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750dd69c-fa3d-4799-8c7e-42f0328254f6" containerName="mariadb-account-create-update"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.132699 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="750dd69c-fa3d-4799-8c7e-42f0328254f6" containerName="mariadb-account-create-update"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: E0313 11:12:34.132716 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae0d894-8db9-48e7-9ac5-776c822a483c" containerName="mariadb-database-create"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.132722 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae0d894-8db9-48e7-9ac5-776c822a483c" containerName="mariadb-database-create"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: E0313 11:12:34.132732 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16c74e7-f812-472d-9023-596975e4f499" containerName="mariadb-database-create"
Mar 13 11:12:34.132706 master-0 kubenswrapper[33013]: I0313 11:12:34.132739 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16c74e7-f812-472d-9023-596975e4f499" containerName="mariadb-database-create"
Mar 13 11:12:34.134133 master-0 kubenswrapper[33013]: I0313 11:12:34.132950 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="5923a466-b63b-4110-a0d0-535eb1eb2d09" containerName="mariadb-account-create-update"
Mar 13 11:12:34.134133 master-0 kubenswrapper[33013]: I0313 11:12:34.132989 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16c74e7-f812-472d-9023-596975e4f499" containerName="mariadb-database-create"
Mar 13 11:12:34.134133 master-0 kubenswrapper[33013]: I0313 11:12:34.133001 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d2c12d-e0f4-4826-a942-52d09d6ff4ca" containerName="mariadb-database-create"
Mar 13 11:12:34.134133 master-0 kubenswrapper[33013]: I0313 11:12:34.133016 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad76c9b2-369d-443a-abe4-09a4081e67de" containerName="mariadb-account-create-update"
Mar 13 11:12:34.134133 master-0 kubenswrapper[33013]: I0313 11:12:34.133035 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30d941-db8d-4248-bf0b-535afba17d11" containerName="dnsmasq-dns"
Mar 13 11:12:34.134133 master-0 kubenswrapper[33013]: I0313 11:12:34.133057 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db9c8d1-1f9e-46a2-b1b6-9398919b760b" containerName="mariadb-account-create-update"
Mar 13 11:12:34.134133 master-0 kubenswrapper[33013]: I0313 11:12:34.133072 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae0d894-8db9-48e7-9ac5-776c822a483c" containerName="mariadb-database-create"
Mar 13 11:12:34.134133 master-0 kubenswrapper[33013]: I0313 11:12:34.133102 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="750dd69c-fa3d-4799-8c7e-42f0328254f6" containerName="mariadb-account-create-update"
Mar 13 11:12:34.134133 master-0 kubenswrapper[33013]: I0313 11:12:34.133852 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.154616 master-0 kubenswrapper[33013]: I0313 11:12:34.145156 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-87aa4-config-data"
Mar 13 11:12:34.203727 master-0 kubenswrapper[33013]: I0313 11:12:34.203670 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-69lvv"]
Mar 13 11:12:34.274067 master-0 kubenswrapper[33013]: I0313 11:12:34.273990 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-config-data\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.274315 master-0 kubenswrapper[33013]: I0313 11:12:34.274094 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-db-sync-config-data\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.274315 master-0 kubenswrapper[33013]: I0313 11:12:34.274132 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-combined-ca-bundle\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.274657 master-0 kubenswrapper[33013]: I0313 11:12:34.274605 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s99r\" (UniqueName: \"kubernetes.io/projected/ec43ecb9-e354-475a-aa0e-4dbe06716927-kube-api-access-9s99r\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.342783 master-0 kubenswrapper[33013]: I0313 11:12:34.342668 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-v5mmf"]
Mar 13 11:12:34.353307 master-0 kubenswrapper[33013]: I0313 11:12:34.353241 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-v5mmf"]
Mar 13 11:12:34.377739 master-0 kubenswrapper[33013]: I0313 11:12:34.377632 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-config-data\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.378068 master-0 kubenswrapper[33013]: I0313 11:12:34.378049 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-db-sync-config-data\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.378766 master-0 kubenswrapper[33013]: I0313 11:12:34.378145 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-combined-ca-bundle\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.378997 master-0 kubenswrapper[33013]: I0313 11:12:34.378972 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s99r\" (UniqueName: \"kubernetes.io/projected/ec43ecb9-e354-475a-aa0e-4dbe06716927-kube-api-access-9s99r\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.386370 master-0 kubenswrapper[33013]: I0313 11:12:34.385782 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-combined-ca-bundle\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.386370 master-0 kubenswrapper[33013]: I0313 11:12:34.386070 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-config-data\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.393522 master-0 kubenswrapper[33013]: I0313 11:12:34.393460 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-db-sync-config-data\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.408366 master-0 kubenswrapper[33013]: I0313 11:12:34.407842 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s99r\" (UniqueName: \"kubernetes.io/projected/ec43ecb9-e354-475a-aa0e-4dbe06716927-kube-api-access-9s99r\") pod \"glance-db-sync-69lvv\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.501864 master-0 kubenswrapper[33013]: I0313 11:12:34.501767 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-69lvv"
Mar 13 11:12:34.745622 master-0 kubenswrapper[33013]: I0313 11:12:34.745437 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750dd69c-fa3d-4799-8c7e-42f0328254f6" path="/var/lib/kubelet/pods/750dd69c-fa3d-4799-8c7e-42f0328254f6/volumes"
Mar 13 11:12:35.265674 master-0 kubenswrapper[33013]: I0313 11:12:35.263902 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-69lvv"]
Mar 13 11:12:35.271615 master-0 kubenswrapper[33013]: W0313 11:12:35.271550 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec43ecb9_e354_475a_aa0e_4dbe06716927.slice/crio-96c280857d76d9635a70e14a4b1ffef9871778d4985df5b8e7d29b58b70f3fad WatchSource:0}: Error finding container 96c280857d76d9635a70e14a4b1ffef9871778d4985df5b8e7d29b58b70f3fad: Status 404 returned error can't find the container with id 96c280857d76d9635a70e14a4b1ffef9871778d4985df5b8e7d29b58b70f3fad
Mar 13 11:12:35.537321 master-0 kubenswrapper[33013]: I0313 11:12:35.537180 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-69lvv" event={"ID":"ec43ecb9-e354-475a-aa0e-4dbe06716927","Type":"ContainerStarted","Data":"96c280857d76d9635a70e14a4b1ffef9871778d4985df5b8e7d29b58b70f3fad"}
Mar 13 11:12:36.549711 master-0 kubenswrapper[33013]: I0313 11:12:36.549647 33013 generic.go:334] "Generic (PLEG): container finished" podID="bf20429e-cff0-4482-b2f6-3aab17d64e57" containerID="286a3d3fb1fc76c87939b88b353df5f13b0f7db9e4e868978ac90de61ff8349c" exitCode=0
Mar 13 11:12:36.549711 master-0 kubenswrapper[33013]: I0313 11:12:36.549715 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-flpq9" event={"ID":"bf20429e-cff0-4482-b2f6-3aab17d64e57","Type":"ContainerDied","Data":"286a3d3fb1fc76c87939b88b353df5f13b0f7db9e4e868978ac90de61ff8349c"}
Mar 13 11:12:37.254569 master-0 kubenswrapper[33013]: I0313 11:12:37.254469 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0"
Mar 13 11:12:37.260656 master-0 kubenswrapper[33013]: I0313 11:12:37.259636 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/303793a4-990d-4b5f-bb44-ff67b1985406-etc-swift\") pod \"swift-storage-0\" (UID: \"303793a4-990d-4b5f-bb44-ff67b1985406\") " pod="openstack/swift-storage-0"
Mar 13 11:12:37.279002 master-0 kubenswrapper[33013]: I0313 11:12:37.278327 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Mar 13 11:12:37.774603 master-0 kubenswrapper[33013]: I0313 11:12:37.774471 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Mar 13 11:12:37.808668 master-0 kubenswrapper[33013]: I0313 11:12:37.804195 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 13 11:12:38.170493 master-0 kubenswrapper[33013]: I0313 11:12:38.170424 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b8kgc"
Mar 13 11:12:38.173639 master-0 kubenswrapper[33013]: I0313 11:12:38.173525 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b8kgc"
Mar 13 11:12:38.194357 master-0 kubenswrapper[33013]: I0313 11:12:38.194250 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-flpq9"
Mar 13 11:12:38.256650 master-0 kubenswrapper[33013]: I0313 11:12:38.248898 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gvr6s" podUID="df93307e-94fa-45f4-b6b5-5c84b07b116d" containerName="ovn-controller" probeResult="failure" output=<
Mar 13 11:12:38.256650 master-0 kubenswrapper[33013]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Mar 13 11:12:38.256650 master-0 kubenswrapper[33013]: >
Mar 13 11:12:38.292208 master-0 kubenswrapper[33013]: I0313 11:12:38.290880 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-swiftconf\") pod \"bf20429e-cff0-4482-b2f6-3aab17d64e57\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") "
Mar 13 11:12:38.292208 master-0 kubenswrapper[33013]: I0313 11:12:38.291031 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-scripts\") pod \"bf20429e-cff0-4482-b2f6-3aab17d64e57\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") "
Mar 13 11:12:38.292208 master-0 kubenswrapper[33013]: I0313 11:12:38.291085 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-combined-ca-bundle\") pod \"bf20429e-cff0-4482-b2f6-3aab17d64e57\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") "
Mar 13 11:12:38.292208 master-0 kubenswrapper[33013]: I0313 11:12:38.291113 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-dispersionconf\") pod \"bf20429e-cff0-4482-b2f6-3aab17d64e57\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") "
Mar 13 11:12:38.292208 master-0 kubenswrapper[33013]: I0313 11:12:38.291223 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwssr\" (UniqueName: \"kubernetes.io/projected/bf20429e-cff0-4482-b2f6-3aab17d64e57-kube-api-access-kwssr\") pod \"bf20429e-cff0-4482-b2f6-3aab17d64e57\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") "
Mar 13 11:12:38.292208 master-0 kubenswrapper[33013]: I0313 11:12:38.291275 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-ring-data-devices\") pod \"bf20429e-cff0-4482-b2f6-3aab17d64e57\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") "
Mar 13 11:12:38.292208 master-0 kubenswrapper[33013]: I0313 11:12:38.291322 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf20429e-cff0-4482-b2f6-3aab17d64e57-etc-swift\") pod \"bf20429e-cff0-4482-b2f6-3aab17d64e57\" (UID: \"bf20429e-cff0-4482-b2f6-3aab17d64e57\") "
Mar 13 11:12:38.292968 master-0 kubenswrapper[33013]: I0313 11:12:38.292877 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bf20429e-cff0-4482-b2f6-3aab17d64e57" (UID: "bf20429e-cff0-4482-b2f6-3aab17d64e57"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:12:38.293137 master-0 kubenswrapper[33013]: I0313 11:12:38.293096 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf20429e-cff0-4482-b2f6-3aab17d64e57-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bf20429e-cff0-4482-b2f6-3aab17d64e57" (UID: "bf20429e-cff0-4482-b2f6-3aab17d64e57"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:12:38.296127 master-0 kubenswrapper[33013]: I0313 11:12:38.296056 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf20429e-cff0-4482-b2f6-3aab17d64e57-kube-api-access-kwssr" (OuterVolumeSpecName: "kube-api-access-kwssr") pod "bf20429e-cff0-4482-b2f6-3aab17d64e57" (UID: "bf20429e-cff0-4482-b2f6-3aab17d64e57"). InnerVolumeSpecName "kube-api-access-kwssr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:12:38.305773 master-0 kubenswrapper[33013]: I0313 11:12:38.305714 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bf20429e-cff0-4482-b2f6-3aab17d64e57" (UID: "bf20429e-cff0-4482-b2f6-3aab17d64e57"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:12:38.347352 master-0 kubenswrapper[33013]: I0313 11:12:38.343936 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf20429e-cff0-4482-b2f6-3aab17d64e57" (UID: "bf20429e-cff0-4482-b2f6-3aab17d64e57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:12:38.347352 master-0 kubenswrapper[33013]: I0313 11:12:38.344755 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-scripts" (OuterVolumeSpecName: "scripts") pod "bf20429e-cff0-4482-b2f6-3aab17d64e57" (UID: "bf20429e-cff0-4482-b2f6-3aab17d64e57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:12:38.347352 master-0 kubenswrapper[33013]: I0313 11:12:38.347318 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bf20429e-cff0-4482-b2f6-3aab17d64e57" (UID: "bf20429e-cff0-4482-b2f6-3aab17d64e57"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:12:38.395219 master-0 kubenswrapper[33013]: I0313 11:12:38.395131 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwssr\" (UniqueName: \"kubernetes.io/projected/bf20429e-cff0-4482-b2f6-3aab17d64e57-kube-api-access-kwssr\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:38.395219 master-0 kubenswrapper[33013]: I0313 11:12:38.395204 33013 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-ring-data-devices\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:38.395219 master-0 kubenswrapper[33013]: I0313 11:12:38.395224 33013 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf20429e-cff0-4482-b2f6-3aab17d64e57-etc-swift\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:38.395550 master-0 kubenswrapper[33013]: I0313 11:12:38.395242 33013 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-swiftconf\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:38.395550 master-0 kubenswrapper[33013]: I0313 11:12:38.395258 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf20429e-cff0-4482-b2f6-3aab17d64e57-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:12:38.395550 master-0 kubenswrapper[33013]: I0313 11:12:38.395277 33013 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:38.395550 master-0 kubenswrapper[33013]: I0313 11:12:38.395293 33013 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf20429e-cff0-4482-b2f6-3aab17d64e57-dispersionconf\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:38.503797 master-0 kubenswrapper[33013]: I0313 11:12:38.497729 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gvr6s-config-lcn4z"] Mar 13 11:12:38.503797 master-0 kubenswrapper[33013]: E0313 11:12:38.498281 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf20429e-cff0-4482-b2f6-3aab17d64e57" containerName="swift-ring-rebalance" Mar 13 11:12:38.503797 master-0 kubenswrapper[33013]: I0313 11:12:38.498353 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf20429e-cff0-4482-b2f6-3aab17d64e57" containerName="swift-ring-rebalance" Mar 13 11:12:38.503797 master-0 kubenswrapper[33013]: I0313 11:12:38.498668 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf20429e-cff0-4482-b2f6-3aab17d64e57" containerName="swift-ring-rebalance" Mar 13 11:12:38.503797 master-0 kubenswrapper[33013]: I0313 11:12:38.499448 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.508716 master-0 kubenswrapper[33013]: I0313 11:12:38.506054 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 13 11:12:38.510358 master-0 kubenswrapper[33013]: I0313 11:12:38.510297 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gvr6s-config-lcn4z"] Mar 13 11:12:38.589248 master-0 kubenswrapper[33013]: I0313 11:12:38.589177 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1267f71c-34a8-4904-bfb6-de85ae27cd8a","Type":"ContainerDied","Data":"c0952ccd46c81a64fe136b6673f6c0fa6dd5c677de57886b0f7f78a748910537"} Mar 13 11:12:38.589707 master-0 kubenswrapper[33013]: I0313 11:12:38.589182 33013 generic.go:334] "Generic (PLEG): container finished" podID="1267f71c-34a8-4904-bfb6-de85ae27cd8a" containerID="c0952ccd46c81a64fe136b6673f6c0fa6dd5c677de57886b0f7f78a748910537" exitCode=0 Mar 13 11:12:38.601236 master-0 kubenswrapper[33013]: I0313 11:12:38.601171 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"fea7ac6e79aec5ae690287c09dad3ad17513ec6277a6de8a128d354a55436522"} Mar 13 11:12:38.606418 master-0 kubenswrapper[33013]: I0313 11:12:38.605383 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run-ovn\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.606418 master-0 kubenswrapper[33013]: I0313 11:12:38.605567 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mqj\" (UniqueName: 
\"kubernetes.io/projected/ee87cc1e-7cc5-4619-8672-7f79b2db790b-kube-api-access-46mqj\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.606418 master-0 kubenswrapper[33013]: I0313 11:12:38.605743 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-additional-scripts\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.606418 master-0 kubenswrapper[33013]: I0313 11:12:38.605791 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.606418 master-0 kubenswrapper[33013]: I0313 11:12:38.605811 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-scripts\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.606418 master-0 kubenswrapper[33013]: I0313 11:12:38.605841 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-log-ovn\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.606418 master-0 kubenswrapper[33013]: I0313 
11:12:38.606195 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-flpq9" event={"ID":"bf20429e-cff0-4482-b2f6-3aab17d64e57","Type":"ContainerDied","Data":"dbd3c096486e77968440eb6462e331e88eb052c578708cc6a78714b25c7138d2"} Mar 13 11:12:38.606418 master-0 kubenswrapper[33013]: I0313 11:12:38.606225 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbd3c096486e77968440eb6462e331e88eb052c578708cc6a78714b25c7138d2" Mar 13 11:12:38.606418 master-0 kubenswrapper[33013]: I0313 11:12:38.606302 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-flpq9" Mar 13 11:12:38.613595 master-0 kubenswrapper[33013]: I0313 11:12:38.613525 33013 generic.go:334] "Generic (PLEG): container finished" podID="a3ecc521-569a-4aca-9e52-6e504c9f96de" containerID="0365f4a769e3a7eb0fc7be63cfcf5cf439fb1d1ac981c254544f7f8b042eee36" exitCode=0 Mar 13 11:12:38.613844 master-0 kubenswrapper[33013]: I0313 11:12:38.613787 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a3ecc521-569a-4aca-9e52-6e504c9f96de","Type":"ContainerDied","Data":"0365f4a769e3a7eb0fc7be63cfcf5cf439fb1d1ac981c254544f7f8b042eee36"} Mar 13 11:12:38.708521 master-0 kubenswrapper[33013]: I0313 11:12:38.708478 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-additional-scripts\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.709457 master-0 kubenswrapper[33013]: I0313 11:12:38.709129 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run\") pod 
\"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.709653 master-0 kubenswrapper[33013]: I0313 11:12:38.708863 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.709755 master-0 kubenswrapper[33013]: I0313 11:12:38.709739 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-scripts\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.709861 master-0 kubenswrapper[33013]: I0313 11:12:38.709840 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-log-ovn\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.709949 master-0 kubenswrapper[33013]: I0313 11:12:38.709913 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-log-ovn\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.710363 master-0 kubenswrapper[33013]: I0313 11:12:38.710320 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run-ovn\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.710891 master-0 kubenswrapper[33013]: I0313 11:12:38.710670 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run-ovn\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.712059 master-0 kubenswrapper[33013]: I0313 11:12:38.711567 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46mqj\" (UniqueName: \"kubernetes.io/projected/ee87cc1e-7cc5-4619-8672-7f79b2db790b-kube-api-access-46mqj\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.714374 master-0 kubenswrapper[33013]: I0313 11:12:38.714323 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-additional-scripts\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.715041 master-0 kubenswrapper[33013]: I0313 11:12:38.714987 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-scripts\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.730109 master-0 kubenswrapper[33013]: I0313 11:12:38.730066 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-46mqj\" (UniqueName: \"kubernetes.io/projected/ee87cc1e-7cc5-4619-8672-7f79b2db790b-kube-api-access-46mqj\") pod \"ovn-controller-gvr6s-config-lcn4z\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:38.820824 master-0 kubenswrapper[33013]: I0313 11:12:38.820767 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:39.574217 master-0 kubenswrapper[33013]: I0313 11:12:39.574044 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gvr6s-config-lcn4z"] Mar 13 11:12:39.642541 master-0 kubenswrapper[33013]: I0313 11:12:39.641255 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a3ecc521-569a-4aca-9e52-6e504c9f96de","Type":"ContainerStarted","Data":"c52aa12c9b989323c2f2bfe6de558236e43657541111caba2ab4aec38cf5f479"} Mar 13 11:12:39.642541 master-0 kubenswrapper[33013]: I0313 11:12:39.641575 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:12:39.647040 master-0 kubenswrapper[33013]: I0313 11:12:39.646991 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1267f71c-34a8-4904-bfb6-de85ae27cd8a","Type":"ContainerStarted","Data":"8cdadf9de308d4102150eac332c13e2d0f531c6f51f30f847ef27faf4206dd9e"} Mar 13 11:12:39.647288 master-0 kubenswrapper[33013]: I0313 11:12:39.647248 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 13 11:12:39.736131 master-0 kubenswrapper[33013]: I0313 11:12:39.726338 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fsnm8"] Mar 13 11:12:39.736131 master-0 kubenswrapper[33013]: I0313 11:12:39.728284 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:39.736131 master-0 kubenswrapper[33013]: I0313 11:12:39.731370 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 13 11:12:39.741177 master-0 kubenswrapper[33013]: I0313 11:12:39.740290 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fsnm8"] Mar 13 11:12:39.749653 master-0 kubenswrapper[33013]: I0313 11:12:39.748658 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=58.110972048 podStartE2EDuration="1m6.748633581s" podCreationTimestamp="2026-03-13 11:11:33 +0000 UTC" firstStartedPulling="2026-03-13 11:11:55.700087405 +0000 UTC m=+899.176040754" lastFinishedPulling="2026-03-13 11:12:04.337748938 +0000 UTC m=+907.813702287" observedRunningTime="2026-03-13 11:12:39.724738707 +0000 UTC m=+943.200692056" watchObservedRunningTime="2026-03-13 11:12:39.748633581 +0000 UTC m=+943.224586930" Mar 13 11:12:39.765654 master-0 kubenswrapper[33013]: I0313 11:12:39.763882 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=58.625754331 podStartE2EDuration="1m7.76384464s" podCreationTimestamp="2026-03-13 11:11:32 +0000 UTC" firstStartedPulling="2026-03-13 11:11:55.155627856 +0000 UTC m=+898.631581205" lastFinishedPulling="2026-03-13 11:12:04.293718165 +0000 UTC m=+907.769671514" observedRunningTime="2026-03-13 11:12:39.754083795 +0000 UTC m=+943.230037164" watchObservedRunningTime="2026-03-13 11:12:39.76384464 +0000 UTC m=+943.239797989" Mar 13 11:12:39.946777 master-0 kubenswrapper[33013]: I0313 11:12:39.946513 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm65d\" (UniqueName: 
\"kubernetes.io/projected/0906c20c-c44b-4754-921c-3c934a52b11d-kube-api-access-hm65d\") pod \"root-account-create-update-fsnm8\" (UID: \"0906c20c-c44b-4754-921c-3c934a52b11d\") " pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:39.946777 master-0 kubenswrapper[33013]: I0313 11:12:39.946655 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906c20c-c44b-4754-921c-3c934a52b11d-operator-scripts\") pod \"root-account-create-update-fsnm8\" (UID: \"0906c20c-c44b-4754-921c-3c934a52b11d\") " pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:40.050664 master-0 kubenswrapper[33013]: I0313 11:12:40.048886 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm65d\" (UniqueName: \"kubernetes.io/projected/0906c20c-c44b-4754-921c-3c934a52b11d-kube-api-access-hm65d\") pod \"root-account-create-update-fsnm8\" (UID: \"0906c20c-c44b-4754-921c-3c934a52b11d\") " pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:40.050664 master-0 kubenswrapper[33013]: I0313 11:12:40.048964 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906c20c-c44b-4754-921c-3c934a52b11d-operator-scripts\") pod \"root-account-create-update-fsnm8\" (UID: \"0906c20c-c44b-4754-921c-3c934a52b11d\") " pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:40.050664 master-0 kubenswrapper[33013]: I0313 11:12:40.049871 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906c20c-c44b-4754-921c-3c934a52b11d-operator-scripts\") pod \"root-account-create-update-fsnm8\" (UID: \"0906c20c-c44b-4754-921c-3c934a52b11d\") " pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:40.068248 master-0 kubenswrapper[33013]: I0313 11:12:40.067705 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm65d\" (UniqueName: \"kubernetes.io/projected/0906c20c-c44b-4754-921c-3c934a52b11d-kube-api-access-hm65d\") pod \"root-account-create-update-fsnm8\" (UID: \"0906c20c-c44b-4754-921c-3c934a52b11d\") " pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:40.087944 master-0 kubenswrapper[33013]: I0313 11:12:40.087400 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:40.635122 master-0 kubenswrapper[33013]: I0313 11:12:40.631930 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fsnm8"] Mar 13 11:12:40.642007 master-0 kubenswrapper[33013]: W0313 11:12:40.640955 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0906c20c_c44b_4754_921c_3c934a52b11d.slice/crio-44c48239f8afe7d1a75a9e489ee3f562721f782dfe79deaa1fb3dfbfd9402fb0 WatchSource:0}: Error finding container 44c48239f8afe7d1a75a9e489ee3f562721f782dfe79deaa1fb3dfbfd9402fb0: Status 404 returned error can't find the container with id 44c48239f8afe7d1a75a9e489ee3f562721f782dfe79deaa1fb3dfbfd9402fb0 Mar 13 11:12:40.675158 master-0 kubenswrapper[33013]: I0313 11:12:40.675041 33013 generic.go:334] "Generic (PLEG): container finished" podID="ee87cc1e-7cc5-4619-8672-7f79b2db790b" containerID="a8093acf32ff7e518db478a11ffab4795f90f6d19750181ab05d58a7423594e2" exitCode=0 Mar 13 11:12:40.675395 master-0 kubenswrapper[33013]: I0313 11:12:40.675229 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gvr6s-config-lcn4z" event={"ID":"ee87cc1e-7cc5-4619-8672-7f79b2db790b","Type":"ContainerDied","Data":"a8093acf32ff7e518db478a11ffab4795f90f6d19750181ab05d58a7423594e2"} Mar 13 11:12:40.675395 master-0 kubenswrapper[33013]: I0313 11:12:40.675268 33013 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-gvr6s-config-lcn4z" event={"ID":"ee87cc1e-7cc5-4619-8672-7f79b2db790b","Type":"ContainerStarted","Data":"992bbf20f1e051da7192e4c9e988083187afc38ff3c2dc9188576956556d477d"} Mar 13 11:12:40.679396 master-0 kubenswrapper[33013]: I0313 11:12:40.679341 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"45f37652725f1e2dedec48ff564d59972cd0d3f2cfb072fa874539f73635de76"} Mar 13 11:12:40.679497 master-0 kubenswrapper[33013]: I0313 11:12:40.679405 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"cd171cd97ef573949ab35c8ad507090cf5784e3839127aa3dff6f245453551ee"} Mar 13 11:12:40.681627 master-0 kubenswrapper[33013]: I0313 11:12:40.681550 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fsnm8" event={"ID":"0906c20c-c44b-4754-921c-3c934a52b11d","Type":"ContainerStarted","Data":"44c48239f8afe7d1a75a9e489ee3f562721f782dfe79deaa1fb3dfbfd9402fb0"} Mar 13 11:12:41.702246 master-0 kubenswrapper[33013]: I0313 11:12:41.702188 33013 generic.go:334] "Generic (PLEG): container finished" podID="0906c20c-c44b-4754-921c-3c934a52b11d" containerID="6c9a8a313191498fcd9b8150c1c9682d31c2866a831970c79db85060f9ff1a8e" exitCode=0 Mar 13 11:12:41.702246 master-0 kubenswrapper[33013]: I0313 11:12:41.702237 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fsnm8" event={"ID":"0906c20c-c44b-4754-921c-3c934a52b11d","Type":"ContainerDied","Data":"6c9a8a313191498fcd9b8150c1c9682d31c2866a831970c79db85060f9ff1a8e"} Mar 13 11:12:41.709636 master-0 kubenswrapper[33013]: I0313 11:12:41.709569 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"708c7673be44125da171abb1cd7e5c5dfa50e610555304227896e8fdf2ef1075"} Mar 13 11:12:41.709636 master-0 kubenswrapper[33013]: I0313 11:12:41.709635 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"7b635be423cb8b19bdcf130c7ace06506c53571334ad1fa8dbd2b8c4cf93bd06"} Mar 13 11:12:42.091127 master-0 kubenswrapper[33013]: I0313 11:12:42.091075 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:42.236485 master-0 kubenswrapper[33013]: I0313 11:12:42.235860 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-additional-scripts\") pod \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " Mar 13 11:12:42.236485 master-0 kubenswrapper[33013]: I0313 11:12:42.236021 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-scripts\") pod \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " Mar 13 11:12:42.236485 master-0 kubenswrapper[33013]: I0313 11:12:42.236083 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run\") pod \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " Mar 13 11:12:42.236485 master-0 kubenswrapper[33013]: I0313 11:12:42.236184 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run-ovn\") pod \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " Mar 13 11:12:42.236485 master-0 kubenswrapper[33013]: I0313 11:12:42.236225 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-log-ovn\") pod \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " Mar 13 11:12:42.236485 master-0 kubenswrapper[33013]: I0313 11:12:42.236337 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ee87cc1e-7cc5-4619-8672-7f79b2db790b" (UID: "ee87cc1e-7cc5-4619-8672-7f79b2db790b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:12:42.236485 master-0 kubenswrapper[33013]: I0313 11:12:42.236369 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run" (OuterVolumeSpecName: "var-run") pod "ee87cc1e-7cc5-4619-8672-7f79b2db790b" (UID: "ee87cc1e-7cc5-4619-8672-7f79b2db790b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:12:42.236485 master-0 kubenswrapper[33013]: I0313 11:12:42.236388 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ee87cc1e-7cc5-4619-8672-7f79b2db790b" (UID: "ee87cc1e-7cc5-4619-8672-7f79b2db790b"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:12:42.236485 master-0 kubenswrapper[33013]: I0313 11:12:42.236481 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46mqj\" (UniqueName: \"kubernetes.io/projected/ee87cc1e-7cc5-4619-8672-7f79b2db790b-kube-api-access-46mqj\") pod \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\" (UID: \"ee87cc1e-7cc5-4619-8672-7f79b2db790b\") " Mar 13 11:12:42.237111 master-0 kubenswrapper[33013]: I0313 11:12:42.236945 33013 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:42.237111 master-0 kubenswrapper[33013]: I0313 11:12:42.236958 33013 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:42.237111 master-0 kubenswrapper[33013]: I0313 11:12:42.236968 33013 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ee87cc1e-7cc5-4619-8672-7f79b2db790b-var-run\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:42.237485 master-0 kubenswrapper[33013]: I0313 11:12:42.237417 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ee87cc1e-7cc5-4619-8672-7f79b2db790b" (UID: "ee87cc1e-7cc5-4619-8672-7f79b2db790b"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:42.237757 master-0 kubenswrapper[33013]: I0313 11:12:42.237663 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-scripts" (OuterVolumeSpecName: "scripts") pod "ee87cc1e-7cc5-4619-8672-7f79b2db790b" (UID: "ee87cc1e-7cc5-4619-8672-7f79b2db790b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:42.247915 master-0 kubenswrapper[33013]: I0313 11:12:42.247854 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee87cc1e-7cc5-4619-8672-7f79b2db790b-kube-api-access-46mqj" (OuterVolumeSpecName: "kube-api-access-46mqj") pod "ee87cc1e-7cc5-4619-8672-7f79b2db790b" (UID: "ee87cc1e-7cc5-4619-8672-7f79b2db790b"). InnerVolumeSpecName "kube-api-access-46mqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:42.338781 master-0 kubenswrapper[33013]: I0313 11:12:42.338673 33013 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-additional-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:42.338781 master-0 kubenswrapper[33013]: I0313 11:12:42.338734 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee87cc1e-7cc5-4619-8672-7f79b2db790b-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:42.338781 master-0 kubenswrapper[33013]: I0313 11:12:42.338748 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46mqj\" (UniqueName: \"kubernetes.io/projected/ee87cc1e-7cc5-4619-8672-7f79b2db790b-kube-api-access-46mqj\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:42.731554 master-0 kubenswrapper[33013]: I0313 11:12:42.731444 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gvr6s-config-lcn4z" Mar 13 11:12:42.744277 master-0 kubenswrapper[33013]: I0313 11:12:42.744240 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gvr6s-config-lcn4z" event={"ID":"ee87cc1e-7cc5-4619-8672-7f79b2db790b","Type":"ContainerDied","Data":"992bbf20f1e051da7192e4c9e988083187afc38ff3c2dc9188576956556d477d"} Mar 13 11:12:42.744515 master-0 kubenswrapper[33013]: I0313 11:12:42.744501 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992bbf20f1e051da7192e4c9e988083187afc38ff3c2dc9188576956556d477d" Mar 13 11:12:43.217438 master-0 kubenswrapper[33013]: I0313 11:12:43.217280 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gvr6s" Mar 13 11:12:43.559428 master-0 kubenswrapper[33013]: I0313 11:12:43.559346 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gvr6s-config-lcn4z"] Mar 13 11:12:43.609878 master-0 kubenswrapper[33013]: I0313 11:12:43.609812 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gvr6s-config-lcn4z"] Mar 13 11:12:44.729881 master-0 kubenswrapper[33013]: I0313 11:12:44.729819 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee87cc1e-7cc5-4619-8672-7f79b2db790b" path="/var/lib/kubelet/pods/ee87cc1e-7cc5-4619-8672-7f79b2db790b/volumes" Mar 13 11:12:50.773895 master-0 kubenswrapper[33013]: I0313 11:12:50.773842 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:50.874133 master-0 kubenswrapper[33013]: I0313 11:12:50.873572 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fsnm8" event={"ID":"0906c20c-c44b-4754-921c-3c934a52b11d","Type":"ContainerDied","Data":"44c48239f8afe7d1a75a9e489ee3f562721f782dfe79deaa1fb3dfbfd9402fb0"} Mar 13 11:12:50.874133 master-0 kubenswrapper[33013]: I0313 11:12:50.873650 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c48239f8afe7d1a75a9e489ee3f562721f782dfe79deaa1fb3dfbfd9402fb0" Mar 13 11:12:50.874133 master-0 kubenswrapper[33013]: I0313 11:12:50.873687 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fsnm8" Mar 13 11:12:50.972164 master-0 kubenswrapper[33013]: I0313 11:12:50.972107 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906c20c-c44b-4754-921c-3c934a52b11d-operator-scripts\") pod \"0906c20c-c44b-4754-921c-3c934a52b11d\" (UID: \"0906c20c-c44b-4754-921c-3c934a52b11d\") " Mar 13 11:12:50.972303 master-0 kubenswrapper[33013]: I0313 11:12:50.972175 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm65d\" (UniqueName: \"kubernetes.io/projected/0906c20c-c44b-4754-921c-3c934a52b11d-kube-api-access-hm65d\") pod \"0906c20c-c44b-4754-921c-3c934a52b11d\" (UID: \"0906c20c-c44b-4754-921c-3c934a52b11d\") " Mar 13 11:12:50.973573 master-0 kubenswrapper[33013]: I0313 11:12:50.973532 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0906c20c-c44b-4754-921c-3c934a52b11d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0906c20c-c44b-4754-921c-3c934a52b11d" (UID: "0906c20c-c44b-4754-921c-3c934a52b11d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:50.987984 master-0 kubenswrapper[33013]: I0313 11:12:50.987910 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0906c20c-c44b-4754-921c-3c934a52b11d-kube-api-access-hm65d" (OuterVolumeSpecName: "kube-api-access-hm65d") pod "0906c20c-c44b-4754-921c-3c934a52b11d" (UID: "0906c20c-c44b-4754-921c-3c934a52b11d"). InnerVolumeSpecName "kube-api-access-hm65d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:51.076408 master-0 kubenswrapper[33013]: I0313 11:12:51.075711 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0906c20c-c44b-4754-921c-3c934a52b11d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:51.076408 master-0 kubenswrapper[33013]: I0313 11:12:51.075770 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm65d\" (UniqueName: \"kubernetes.io/projected/0906c20c-c44b-4754-921c-3c934a52b11d-kube-api-access-hm65d\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:51.885059 master-0 kubenswrapper[33013]: I0313 11:12:51.884991 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-69lvv" event={"ID":"ec43ecb9-e354-475a-aa0e-4dbe06716927","Type":"ContainerStarted","Data":"14db1c6a5dc645fba7b9f6fec826d2c7b5cd75b9acd71c3e98e311b0284c699e"} Mar 13 11:12:51.894217 master-0 kubenswrapper[33013]: I0313 11:12:51.894155 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"164afcad753295e6e5bf88d8ce3dc49f041c21c27486bd43c256752dccd1acc2"} Mar 13 11:12:51.894217 master-0 kubenswrapper[33013]: I0313 11:12:51.894212 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"07eb17899b2355c246d432a41dd8d3ac13d73118b9d861ae1da8b78e394ebddb"} Mar 13 11:12:51.894217 master-0 kubenswrapper[33013]: I0313 11:12:51.894223 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"fca4e450984c445daa838aedef53e6ba2a6d39ce3d099086fa2a871c405aae1b"} Mar 13 11:12:51.894624 master-0 kubenswrapper[33013]: I0313 11:12:51.894233 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"b4790ff99d2469255eda637ed67a427d2abc68cf1e9e37064262b02cf2ce268a"} Mar 13 11:12:51.915986 master-0 kubenswrapper[33013]: I0313 11:12:51.915888 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-69lvv" podStartSLOduration=2.407061568 podStartE2EDuration="17.91586826s" podCreationTimestamp="2026-03-13 11:12:34 +0000 UTC" firstStartedPulling="2026-03-13 11:12:35.275504647 +0000 UTC m=+938.751457996" lastFinishedPulling="2026-03-13 11:12:50.784311339 +0000 UTC m=+954.260264688" observedRunningTime="2026-03-13 11:12:51.90983076 +0000 UTC m=+955.385784109" watchObservedRunningTime="2026-03-13 11:12:51.91586826 +0000 UTC m=+955.391821609" Mar 13 11:12:52.775974 master-0 kubenswrapper[33013]: I0313 11:12:52.775894 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 13 11:12:53.227857 master-0 kubenswrapper[33013]: I0313 11:12:53.226395 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ggtxn"] Mar 13 11:12:53.227857 master-0 kubenswrapper[33013]: E0313 11:12:53.227063 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0906c20c-c44b-4754-921c-3c934a52b11d" containerName="mariadb-account-create-update" 
Mar 13 11:12:53.227857 master-0 kubenswrapper[33013]: I0313 11:12:53.227082 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0906c20c-c44b-4754-921c-3c934a52b11d" containerName="mariadb-account-create-update" Mar 13 11:12:53.227857 master-0 kubenswrapper[33013]: E0313 11:12:53.227116 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee87cc1e-7cc5-4619-8672-7f79b2db790b" containerName="ovn-config" Mar 13 11:12:53.227857 master-0 kubenswrapper[33013]: I0313 11:12:53.227124 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee87cc1e-7cc5-4619-8672-7f79b2db790b" containerName="ovn-config" Mar 13 11:12:53.227857 master-0 kubenswrapper[33013]: I0313 11:12:53.227374 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee87cc1e-7cc5-4619-8672-7f79b2db790b" containerName="ovn-config" Mar 13 11:12:53.227857 master-0 kubenswrapper[33013]: I0313 11:12:53.227413 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0906c20c-c44b-4754-921c-3c934a52b11d" containerName="mariadb-account-create-update" Mar 13 11:12:53.228646 master-0 kubenswrapper[33013]: I0313 11:12:53.228094 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ggtxn" Mar 13 11:12:53.242320 master-0 kubenswrapper[33013]: I0313 11:12:53.242264 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ggtxn"] Mar 13 11:12:53.336694 master-0 kubenswrapper[33013]: I0313 11:12:53.335164 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ceb46b-d857-4ddb-82c4-dbbb416ad706-operator-scripts\") pod \"cinder-db-create-ggtxn\" (UID: \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\") " pod="openstack/cinder-db-create-ggtxn" Mar 13 11:12:53.336694 master-0 kubenswrapper[33013]: I0313 11:12:53.335403 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwd92\" (UniqueName: \"kubernetes.io/projected/25ceb46b-d857-4ddb-82c4-dbbb416ad706-kube-api-access-mwd92\") pod \"cinder-db-create-ggtxn\" (UID: \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\") " pod="openstack/cinder-db-create-ggtxn" Mar 13 11:12:53.420891 master-0 kubenswrapper[33013]: I0313 11:12:53.420806 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8a81-account-create-update-mtcg5"] Mar 13 11:12:53.423365 master-0 kubenswrapper[33013]: I0313 11:12:53.423022 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:53.427618 master-0 kubenswrapper[33013]: I0313 11:12:53.427240 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 13 11:12:53.437847 master-0 kubenswrapper[33013]: I0313 11:12:53.437324 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwd92\" (UniqueName: \"kubernetes.io/projected/25ceb46b-d857-4ddb-82c4-dbbb416ad706-kube-api-access-mwd92\") pod \"cinder-db-create-ggtxn\" (UID: \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\") " pod="openstack/cinder-db-create-ggtxn" Mar 13 11:12:53.437847 master-0 kubenswrapper[33013]: I0313 11:12:53.437470 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ceb46b-d857-4ddb-82c4-dbbb416ad706-operator-scripts\") pod \"cinder-db-create-ggtxn\" (UID: \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\") " pod="openstack/cinder-db-create-ggtxn" Mar 13 11:12:53.438208 master-0 kubenswrapper[33013]: I0313 11:12:53.438159 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8a81-account-create-update-mtcg5"] Mar 13 11:12:53.438783 master-0 kubenswrapper[33013]: I0313 11:12:53.438756 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ceb46b-d857-4ddb-82c4-dbbb416ad706-operator-scripts\") pod \"cinder-db-create-ggtxn\" (UID: \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\") " pod="openstack/cinder-db-create-ggtxn" Mar 13 11:12:53.486103 master-0 kubenswrapper[33013]: I0313 11:12:53.476266 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwd92\" (UniqueName: \"kubernetes.io/projected/25ceb46b-d857-4ddb-82c4-dbbb416ad706-kube-api-access-mwd92\") pod \"cinder-db-create-ggtxn\" (UID: \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\") " 
pod="openstack/cinder-db-create-ggtxn" Mar 13 11:12:53.543722 master-0 kubenswrapper[33013]: I0313 11:12:53.543626 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstm9\" (UniqueName: \"kubernetes.io/projected/7529222b-1d6b-439e-8e73-023ecc18255a-kube-api-access-lstm9\") pod \"cinder-8a81-account-create-update-mtcg5\" (UID: \"7529222b-1d6b-439e-8e73-023ecc18255a\") " pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:53.543984 master-0 kubenswrapper[33013]: I0313 11:12:53.543880 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7529222b-1d6b-439e-8e73-023ecc18255a-operator-scripts\") pod \"cinder-8a81-account-create-update-mtcg5\" (UID: \"7529222b-1d6b-439e-8e73-023ecc18255a\") " pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:53.596851 master-0 kubenswrapper[33013]: I0313 11:12:53.596312 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ggtxn" Mar 13 11:12:53.615239 master-0 kubenswrapper[33013]: I0313 11:12:53.615173 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-t62cj"] Mar 13 11:12:53.617512 master-0 kubenswrapper[33013]: I0313 11:12:53.617473 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-t62cj" Mar 13 11:12:53.638476 master-0 kubenswrapper[33013]: I0313 11:12:53.638404 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t62cj"] Mar 13 11:12:53.651694 master-0 kubenswrapper[33013]: I0313 11:12:53.648622 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lstm9\" (UniqueName: \"kubernetes.io/projected/7529222b-1d6b-439e-8e73-023ecc18255a-kube-api-access-lstm9\") pod \"cinder-8a81-account-create-update-mtcg5\" (UID: \"7529222b-1d6b-439e-8e73-023ecc18255a\") " pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:53.651694 master-0 kubenswrapper[33013]: I0313 11:12:53.648758 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7529222b-1d6b-439e-8e73-023ecc18255a-operator-scripts\") pod \"cinder-8a81-account-create-update-mtcg5\" (UID: \"7529222b-1d6b-439e-8e73-023ecc18255a\") " pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:53.651694 master-0 kubenswrapper[33013]: I0313 11:12:53.651578 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7529222b-1d6b-439e-8e73-023ecc18255a-operator-scripts\") pod \"cinder-8a81-account-create-update-mtcg5\" (UID: \"7529222b-1d6b-439e-8e73-023ecc18255a\") " pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:53.695351 master-0 kubenswrapper[33013]: I0313 11:12:53.695294 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lstm9\" (UniqueName: \"kubernetes.io/projected/7529222b-1d6b-439e-8e73-023ecc18255a-kube-api-access-lstm9\") pod \"cinder-8a81-account-create-update-mtcg5\" (UID: \"7529222b-1d6b-439e-8e73-023ecc18255a\") " pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:53.742688 master-0 
kubenswrapper[33013]: I0313 11:12:53.742426 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c988-account-create-update-tbhzn"] Mar 13 11:12:53.745183 master-0 kubenswrapper[33013]: I0313 11:12:53.745147 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:53.750784 master-0 kubenswrapper[33013]: I0313 11:12:53.750745 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrdg\" (UniqueName: \"kubernetes.io/projected/b449da8c-7bed-422e-bbf5-843c97f4b73b-kube-api-access-tvrdg\") pod \"neutron-db-create-t62cj\" (UID: \"b449da8c-7bed-422e-bbf5-843c97f4b73b\") " pod="openstack/neutron-db-create-t62cj" Mar 13 11:12:53.750886 master-0 kubenswrapper[33013]: I0313 11:12:53.750800 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b449da8c-7bed-422e-bbf5-843c97f4b73b-operator-scripts\") pod \"neutron-db-create-t62cj\" (UID: \"b449da8c-7bed-422e-bbf5-843c97f4b73b\") " pod="openstack/neutron-db-create-t62cj" Mar 13 11:12:53.765052 master-0 kubenswrapper[33013]: I0313 11:12:53.764982 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6zpjq"] Mar 13 11:12:53.765948 master-0 kubenswrapper[33013]: I0313 11:12:53.765899 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 13 11:12:53.768315 master-0 kubenswrapper[33013]: I0313 11:12:53.768277 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.775233 master-0 kubenswrapper[33013]: I0313 11:12:53.773253 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 11:12:53.775233 master-0 kubenswrapper[33013]: I0313 11:12:53.773449 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 11:12:53.775233 master-0 kubenswrapper[33013]: I0313 11:12:53.773551 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 11:12:53.775849 master-0 kubenswrapper[33013]: I0313 11:12:53.775818 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c988-account-create-update-tbhzn"] Mar 13 11:12:53.793502 master-0 kubenswrapper[33013]: I0313 11:12:53.793449 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6zpjq"] Mar 13 11:12:53.866014 master-0 kubenswrapper[33013]: I0313 11:12:53.862364 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22471c80-7d02-4478-a2d4-4ae9e68cb328-operator-scripts\") pod \"neutron-c988-account-create-update-tbhzn\" (UID: \"22471c80-7d02-4478-a2d4-4ae9e68cb328\") " pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:53.866014 master-0 kubenswrapper[33013]: I0313 11:12:53.862433 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-config-data\") pod \"keystone-db-sync-6zpjq\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.866014 master-0 kubenswrapper[33013]: I0313 11:12:53.862605 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mrj9l\" (UniqueName: \"kubernetes.io/projected/22471c80-7d02-4478-a2d4-4ae9e68cb328-kube-api-access-mrj9l\") pod \"neutron-c988-account-create-update-tbhzn\" (UID: \"22471c80-7d02-4478-a2d4-4ae9e68cb328\") " pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:53.866014 master-0 kubenswrapper[33013]: I0313 11:12:53.862699 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvht2\" (UniqueName: \"kubernetes.io/projected/d7be7f77-638d-446e-b9a4-13195f124ca0-kube-api-access-tvht2\") pod \"keystone-db-sync-6zpjq\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.866014 master-0 kubenswrapper[33013]: I0313 11:12:53.863095 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-combined-ca-bundle\") pod \"keystone-db-sync-6zpjq\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.866014 master-0 kubenswrapper[33013]: I0313 11:12:53.863314 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrdg\" (UniqueName: \"kubernetes.io/projected/b449da8c-7bed-422e-bbf5-843c97f4b73b-kube-api-access-tvrdg\") pod \"neutron-db-create-t62cj\" (UID: \"b449da8c-7bed-422e-bbf5-843c97f4b73b\") " pod="openstack/neutron-db-create-t62cj" Mar 13 11:12:53.866014 master-0 kubenswrapper[33013]: I0313 11:12:53.863448 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b449da8c-7bed-422e-bbf5-843c97f4b73b-operator-scripts\") pod \"neutron-db-create-t62cj\" (UID: \"b449da8c-7bed-422e-bbf5-843c97f4b73b\") " pod="openstack/neutron-db-create-t62cj" Mar 13 11:12:53.866014 master-0 kubenswrapper[33013]: I0313 
11:12:53.865461 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b449da8c-7bed-422e-bbf5-843c97f4b73b-operator-scripts\") pod \"neutron-db-create-t62cj\" (UID: \"b449da8c-7bed-422e-bbf5-843c97f4b73b\") " pod="openstack/neutron-db-create-t62cj" Mar 13 11:12:53.884736 master-0 kubenswrapper[33013]: I0313 11:12:53.881200 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:53.896557 master-0 kubenswrapper[33013]: I0313 11:12:53.895837 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrdg\" (UniqueName: \"kubernetes.io/projected/b449da8c-7bed-422e-bbf5-843c97f4b73b-kube-api-access-tvrdg\") pod \"neutron-db-create-t62cj\" (UID: \"b449da8c-7bed-422e-bbf5-843c97f4b73b\") " pod="openstack/neutron-db-create-t62cj" Mar 13 11:12:53.967434 master-0 kubenswrapper[33013]: I0313 11:12:53.965796 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22471c80-7d02-4478-a2d4-4ae9e68cb328-operator-scripts\") pod \"neutron-c988-account-create-update-tbhzn\" (UID: \"22471c80-7d02-4478-a2d4-4ae9e68cb328\") " pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:53.967434 master-0 kubenswrapper[33013]: I0313 11:12:53.965857 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-config-data\") pod \"keystone-db-sync-6zpjq\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.967434 master-0 kubenswrapper[33013]: I0313 11:12:53.965891 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrj9l\" (UniqueName: 
\"kubernetes.io/projected/22471c80-7d02-4478-a2d4-4ae9e68cb328-kube-api-access-mrj9l\") pod \"neutron-c988-account-create-update-tbhzn\" (UID: \"22471c80-7d02-4478-a2d4-4ae9e68cb328\") " pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:53.967434 master-0 kubenswrapper[33013]: I0313 11:12:53.965916 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvht2\" (UniqueName: \"kubernetes.io/projected/d7be7f77-638d-446e-b9a4-13195f124ca0-kube-api-access-tvht2\") pod \"keystone-db-sync-6zpjq\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.967434 master-0 kubenswrapper[33013]: I0313 11:12:53.965968 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-combined-ca-bundle\") pod \"keystone-db-sync-6zpjq\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.967434 master-0 kubenswrapper[33013]: I0313 11:12:53.967001 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22471c80-7d02-4478-a2d4-4ae9e68cb328-operator-scripts\") pod \"neutron-c988-account-create-update-tbhzn\" (UID: \"22471c80-7d02-4478-a2d4-4ae9e68cb328\") " pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:53.971617 master-0 kubenswrapper[33013]: I0313 11:12:53.971558 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-config-data\") pod \"keystone-db-sync-6zpjq\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.985685 master-0 kubenswrapper[33013]: I0313 11:12:53.985610 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-combined-ca-bundle\") pod \"keystone-db-sync-6zpjq\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.986743 master-0 kubenswrapper[33013]: I0313 11:12:53.986172 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvht2\" (UniqueName: \"kubernetes.io/projected/d7be7f77-638d-446e-b9a4-13195f124ca0-kube-api-access-tvht2\") pod \"keystone-db-sync-6zpjq\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:53.994709 master-0 kubenswrapper[33013]: I0313 11:12:53.994617 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrj9l\" (UniqueName: \"kubernetes.io/projected/22471c80-7d02-4478-a2d4-4ae9e68cb328-kube-api-access-mrj9l\") pod \"neutron-c988-account-create-update-tbhzn\" (UID: \"22471c80-7d02-4478-a2d4-4ae9e68cb328\") " pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:54.116540 master-0 kubenswrapper[33013]: I0313 11:12:54.116457 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t62cj" Mar 13 11:12:54.175638 master-0 kubenswrapper[33013]: I0313 11:12:54.175108 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:54.200976 master-0 kubenswrapper[33013]: I0313 11:12:54.200504 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:12:54.281670 master-0 kubenswrapper[33013]: I0313 11:12:54.280790 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ggtxn"] Mar 13 11:12:54.306737 master-0 kubenswrapper[33013]: W0313 11:12:54.306487 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25ceb46b_d857_4ddb_82c4_dbbb416ad706.slice/crio-ea1d25181c3db4eb1c4a85b5203c751f74ac7b0f9763ba05eca4837b4f07d51b WatchSource:0}: Error finding container ea1d25181c3db4eb1c4a85b5203c751f74ac7b0f9763ba05eca4837b4f07d51b: Status 404 returned error can't find the container with id ea1d25181c3db4eb1c4a85b5203c751f74ac7b0f9763ba05eca4837b4f07d51b Mar 13 11:12:54.474133 master-0 kubenswrapper[33013]: I0313 11:12:54.470858 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 13 11:12:54.504973 master-0 kubenswrapper[33013]: W0313 11:12:54.504927 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7529222b_1d6b_439e_8e73_023ecc18255a.slice/crio-7b0ef67e4c1f106bf37cb541b92a2cfb1f01c2fa91e4faf584becd1de7a73462 WatchSource:0}: Error finding container 7b0ef67e4c1f106bf37cb541b92a2cfb1f01c2fa91e4faf584becd1de7a73462: Status 404 returned error can't find the container with id 7b0ef67e4c1f106bf37cb541b92a2cfb1f01c2fa91e4faf584becd1de7a73462 Mar 13 11:12:54.532715 master-0 kubenswrapper[33013]: I0313 11:12:54.532647 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8a81-account-create-update-mtcg5"] Mar 13 11:12:54.795240 master-0 kubenswrapper[33013]: I0313 11:12:54.794857 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t62cj"] Mar 13 11:12:54.929399 master-0 kubenswrapper[33013]: I0313 11:12:54.929331 33013 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-c988-account-create-update-tbhzn"] Mar 13 11:12:55.055036 master-0 kubenswrapper[33013]: I0313 11:12:55.054982 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t62cj" event={"ID":"b449da8c-7bed-422e-bbf5-843c97f4b73b","Type":"ContainerStarted","Data":"5bd325f3aebe2cc37ae19a5fb776d9939da553ff7f24efd58db3bbf298c7fe92"} Mar 13 11:12:55.061420 master-0 kubenswrapper[33013]: I0313 11:12:55.061386 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6zpjq"] Mar 13 11:12:55.067173 master-0 kubenswrapper[33013]: I0313 11:12:55.067109 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"5777bac786e332c169934f223d1d137b4c7b32d2a95e5b3a443d418290d7ce59"} Mar 13 11:12:55.067406 master-0 kubenswrapper[33013]: I0313 11:12:55.067388 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"eea29932d16b459bb6842878143783d028ee37e76a4e55c3e82157fbcf207dab"} Mar 13 11:12:55.067528 master-0 kubenswrapper[33013]: I0313 11:12:55.067477 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"376e0226172ff34dc7be3eed107f566d7c0f1ada35311c069cf0c042c9c0eb9a"} Mar 13 11:12:55.069979 master-0 kubenswrapper[33013]: I0313 11:12:55.069939 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ggtxn" event={"ID":"25ceb46b-d857-4ddb-82c4-dbbb416ad706","Type":"ContainerStarted","Data":"fecfeba4163b0c48e9b72f3cf6d67e455a7a60301fb55e404b66dc2579c87209"} Mar 13 11:12:55.070106 master-0 kubenswrapper[33013]: I0313 11:12:55.070092 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-ggtxn" event={"ID":"25ceb46b-d857-4ddb-82c4-dbbb416ad706","Type":"ContainerStarted","Data":"ea1d25181c3db4eb1c4a85b5203c751f74ac7b0f9763ba05eca4837b4f07d51b"} Mar 13 11:12:55.074418 master-0 kubenswrapper[33013]: W0313 11:12:55.074392 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7be7f77_638d_446e_b9a4_13195f124ca0.slice/crio-a85721262e34e505f039ee0d27d2a385680f0f48675cf0ccbcd267892ea2fc83 WatchSource:0}: Error finding container a85721262e34e505f039ee0d27d2a385680f0f48675cf0ccbcd267892ea2fc83: Status 404 returned error can't find the container with id a85721262e34e505f039ee0d27d2a385680f0f48675cf0ccbcd267892ea2fc83 Mar 13 11:12:55.077422 master-0 kubenswrapper[33013]: I0313 11:12:55.077371 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8a81-account-create-update-mtcg5" event={"ID":"7529222b-1d6b-439e-8e73-023ecc18255a","Type":"ContainerStarted","Data":"eae752490a2297ebc0b179f885f7a0ffca02cda2ce9ec68d9d7c128df5a9fe8e"} Mar 13 11:12:55.077521 master-0 kubenswrapper[33013]: I0313 11:12:55.077431 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8a81-account-create-update-mtcg5" event={"ID":"7529222b-1d6b-439e-8e73-023ecc18255a","Type":"ContainerStarted","Data":"7b0ef67e4c1f106bf37cb541b92a2cfb1f01c2fa91e4faf584becd1de7a73462"} Mar 13 11:12:55.111029 master-0 kubenswrapper[33013]: I0313 11:12:55.108856 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-ggtxn" podStartSLOduration=2.108832959 podStartE2EDuration="2.108832959s" podCreationTimestamp="2026-03-13 11:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:12:55.101138252 +0000 UTC m=+958.577091601" watchObservedRunningTime="2026-03-13 11:12:55.108832959 +0000 UTC 
m=+958.584786308" Mar 13 11:12:55.139627 master-0 kubenswrapper[33013]: I0313 11:12:55.130726 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-8a81-account-create-update-mtcg5" podStartSLOduration=2.130707866 podStartE2EDuration="2.130707866s" podCreationTimestamp="2026-03-13 11:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:12:55.126994721 +0000 UTC m=+958.602948070" watchObservedRunningTime="2026-03-13 11:12:55.130707866 +0000 UTC m=+958.606661215" Mar 13 11:12:56.100342 master-0 kubenswrapper[33013]: I0313 11:12:56.100265 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zpjq" event={"ID":"d7be7f77-638d-446e-b9a4-13195f124ca0","Type":"ContainerStarted","Data":"a85721262e34e505f039ee0d27d2a385680f0f48675cf0ccbcd267892ea2fc83"} Mar 13 11:12:56.113983 master-0 kubenswrapper[33013]: I0313 11:12:56.113760 33013 generic.go:334] "Generic (PLEG): container finished" podID="7529222b-1d6b-439e-8e73-023ecc18255a" containerID="eae752490a2297ebc0b179f885f7a0ffca02cda2ce9ec68d9d7c128df5a9fe8e" exitCode=0 Mar 13 11:12:56.114391 master-0 kubenswrapper[33013]: I0313 11:12:56.114355 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8a81-account-create-update-mtcg5" event={"ID":"7529222b-1d6b-439e-8e73-023ecc18255a","Type":"ContainerDied","Data":"eae752490a2297ebc0b179f885f7a0ffca02cda2ce9ec68d9d7c128df5a9fe8e"} Mar 13 11:12:56.119286 master-0 kubenswrapper[33013]: I0313 11:12:56.119236 33013 generic.go:334] "Generic (PLEG): container finished" podID="b449da8c-7bed-422e-bbf5-843c97f4b73b" containerID="03bd0c5745b80a7a19932a5392eb736e806d45816319e270036c62b8bfb2634a" exitCode=0 Mar 13 11:12:56.119566 master-0 kubenswrapper[33013]: I0313 11:12:56.119539 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t62cj" 
event={"ID":"b449da8c-7bed-422e-bbf5-843c97f4b73b","Type":"ContainerDied","Data":"03bd0c5745b80a7a19932a5392eb736e806d45816319e270036c62b8bfb2634a"} Mar 13 11:12:56.125302 master-0 kubenswrapper[33013]: I0313 11:12:56.125228 33013 generic.go:334] "Generic (PLEG): container finished" podID="22471c80-7d02-4478-a2d4-4ae9e68cb328" containerID="ae8dce7e3bc7efb355f3ec109360ce5165b206bdded212cfd693a71917ad2baa" exitCode=0 Mar 13 11:12:56.125501 master-0 kubenswrapper[33013]: I0313 11:12:56.125423 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c988-account-create-update-tbhzn" event={"ID":"22471c80-7d02-4478-a2d4-4ae9e68cb328","Type":"ContainerDied","Data":"ae8dce7e3bc7efb355f3ec109360ce5165b206bdded212cfd693a71917ad2baa"} Mar 13 11:12:56.125581 master-0 kubenswrapper[33013]: I0313 11:12:56.125518 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c988-account-create-update-tbhzn" event={"ID":"22471c80-7d02-4478-a2d4-4ae9e68cb328","Type":"ContainerStarted","Data":"bfe1802f936e7a75759d66b12f8418c2dd370af6705bcaed8fadfdbadb8e8f01"} Mar 13 11:12:56.157666 master-0 kubenswrapper[33013]: I0313 11:12:56.156733 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"29938d3ec30477acf231e079677e71e6fe134760dde001708215ecbe2574ebc2"} Mar 13 11:12:56.157666 master-0 kubenswrapper[33013]: I0313 11:12:56.156789 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"5aa5306525d3ece931b6a517377d5a143d2f1af7198f0d94881b121d1b7c7986"} Mar 13 11:12:56.157666 master-0 kubenswrapper[33013]: I0313 11:12:56.156799 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"34cfb31f8717bc1e0b4b02ecb26a3ed431aefaced925614f4d30bbe753976b8d"} Mar 13 11:12:56.161625 master-0 kubenswrapper[33013]: I0313 11:12:56.159834 33013 generic.go:334] "Generic (PLEG): container finished" podID="25ceb46b-d857-4ddb-82c4-dbbb416ad706" containerID="fecfeba4163b0c48e9b72f3cf6d67e455a7a60301fb55e404b66dc2579c87209" exitCode=0 Mar 13 11:12:56.161625 master-0 kubenswrapper[33013]: I0313 11:12:56.159879 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ggtxn" event={"ID":"25ceb46b-d857-4ddb-82c4-dbbb416ad706","Type":"ContainerDied","Data":"fecfeba4163b0c48e9b72f3cf6d67e455a7a60301fb55e404b66dc2579c87209"} Mar 13 11:12:57.924604 master-0 kubenswrapper[33013]: I0313 11:12:57.921096 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"303793a4-990d-4b5f-bb44-ff67b1985406","Type":"ContainerStarted","Data":"09b87c42561459c92837c8ba7cb33cb9b330509423367f160df09743ad964279"} Mar 13 11:12:57.958056 master-0 kubenswrapper[33013]: I0313 11:12:57.957287 33013 trace.go:236] Trace[923655753]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (13-Mar-2026 11:12:56.705) (total time: 1252ms): Mar 13 11:12:57.958056 master-0 kubenswrapper[33013]: Trace[923655753]: [1.252199694s] [1.252199694s] END Mar 13 11:12:58.104038 master-0 kubenswrapper[33013]: I0313 11:12:58.103459 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=23.455566602 podStartE2EDuration="39.103435215s" podCreationTimestamp="2026-03-13 11:12:19 +0000 UTC" firstStartedPulling="2026-03-13 11:12:37.827969671 +0000 UTC m=+941.303923010" lastFinishedPulling="2026-03-13 11:12:53.475838284 +0000 UTC m=+956.951791623" observedRunningTime="2026-03-13 11:12:58.0954797 +0000 UTC m=+961.571433049" watchObservedRunningTime="2026-03-13 11:12:58.103435215 +0000 
UTC m=+961.579388564" Mar 13 11:12:58.462971 master-0 kubenswrapper[33013]: I0313 11:12:58.462904 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hfrhd"] Mar 13 11:12:58.475718 master-0 kubenswrapper[33013]: I0313 11:12:58.475645 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.481299 master-0 kubenswrapper[33013]: I0313 11:12:58.480612 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 13 11:12:58.492827 master-0 kubenswrapper[33013]: I0313 11:12:58.491180 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hfrhd"] Mar 13 11:12:58.627009 master-0 kubenswrapper[33013]: I0313 11:12:58.626814 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-sb\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.627009 master-0 kubenswrapper[33013]: I0313 11:12:58.626908 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-swift-storage-0\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.627299 master-0 kubenswrapper[33013]: I0313 11:12:58.627028 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-svc\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " 
pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.627299 master-0 kubenswrapper[33013]: I0313 11:12:58.627182 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-nb\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.628032 master-0 kubenswrapper[33013]: I0313 11:12:58.627422 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-config\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.628032 master-0 kubenswrapper[33013]: I0313 11:12:58.627489 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vw5\" (UniqueName: \"kubernetes.io/projected/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-kube-api-access-h8vw5\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.628157 master-0 kubenswrapper[33013]: I0313 11:12:58.628132 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:58.730352 master-0 kubenswrapper[33013]: I0313 11:12:58.730027 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7529222b-1d6b-439e-8e73-023ecc18255a-operator-scripts\") pod \"7529222b-1d6b-439e-8e73-023ecc18255a\" (UID: \"7529222b-1d6b-439e-8e73-023ecc18255a\") " Mar 13 11:12:58.730352 master-0 kubenswrapper[33013]: I0313 11:12:58.730220 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lstm9\" (UniqueName: \"kubernetes.io/projected/7529222b-1d6b-439e-8e73-023ecc18255a-kube-api-access-lstm9\") pod \"7529222b-1d6b-439e-8e73-023ecc18255a\" (UID: \"7529222b-1d6b-439e-8e73-023ecc18255a\") " Mar 13 11:12:58.730692 master-0 kubenswrapper[33013]: I0313 11:12:58.730530 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-config\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.730692 master-0 kubenswrapper[33013]: I0313 11:12:58.730564 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vw5\" (UniqueName: \"kubernetes.io/projected/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-kube-api-access-h8vw5\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.730692 master-0 kubenswrapper[33013]: I0313 11:12:58.730643 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-sb\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " 
pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.730817 master-0 kubenswrapper[33013]: I0313 11:12:58.730728 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-swift-storage-0\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.730817 master-0 kubenswrapper[33013]: I0313 11:12:58.730762 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-svc\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.730817 master-0 kubenswrapper[33013]: I0313 11:12:58.730803 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-nb\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.732431 master-0 kubenswrapper[33013]: I0313 11:12:58.731849 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-nb\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.740320 master-0 kubenswrapper[33013]: I0313 11:12:58.740245 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7529222b-1d6b-439e-8e73-023ecc18255a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7529222b-1d6b-439e-8e73-023ecc18255a" (UID: 
"7529222b-1d6b-439e-8e73-023ecc18255a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:58.745662 master-0 kubenswrapper[33013]: I0313 11:12:58.745290 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7529222b-1d6b-439e-8e73-023ecc18255a-kube-api-access-lstm9" (OuterVolumeSpecName: "kube-api-access-lstm9") pod "7529222b-1d6b-439e-8e73-023ecc18255a" (UID: "7529222b-1d6b-439e-8e73-023ecc18255a"). InnerVolumeSpecName "kube-api-access-lstm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:58.746023 master-0 kubenswrapper[33013]: I0313 11:12:58.745997 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-config\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.746919 master-0 kubenswrapper[33013]: I0313 11:12:58.746879 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-sb\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.747429 master-0 kubenswrapper[33013]: I0313 11:12:58.747406 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-swift-storage-0\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.748075 master-0 kubenswrapper[33013]: I0313 11:12:58.748048 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-svc\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.791318 master-0 kubenswrapper[33013]: I0313 11:12:58.790799 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vw5\" (UniqueName: \"kubernetes.io/projected/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-kube-api-access-h8vw5\") pod \"dnsmasq-dns-75bd79cd5f-hfrhd\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.832857 master-0 kubenswrapper[33013]: I0313 11:12:58.832792 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7529222b-1d6b-439e-8e73-023ecc18255a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:58.832857 master-0 kubenswrapper[33013]: I0313 11:12:58.832839 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lstm9\" (UniqueName: \"kubernetes.io/projected/7529222b-1d6b-439e-8e73-023ecc18255a-kube-api-access-lstm9\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:58.917395 master-0 kubenswrapper[33013]: I0313 11:12:58.917340 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:12:58.953761 master-0 kubenswrapper[33013]: I0313 11:12:58.953698 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:58.957575 master-0 kubenswrapper[33013]: I0313 11:12:58.957531 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8a81-account-create-update-mtcg5" Mar 13 11:12:58.957797 master-0 kubenswrapper[33013]: I0313 11:12:58.957753 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8a81-account-create-update-mtcg5" event={"ID":"7529222b-1d6b-439e-8e73-023ecc18255a","Type":"ContainerDied","Data":"7b0ef67e4c1f106bf37cb541b92a2cfb1f01c2fa91e4faf584becd1de7a73462"} Mar 13 11:12:58.957934 master-0 kubenswrapper[33013]: I0313 11:12:58.957913 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b0ef67e4c1f106bf37cb541b92a2cfb1f01c2fa91e4faf584becd1de7a73462" Mar 13 11:12:58.961087 master-0 kubenswrapper[33013]: I0313 11:12:58.961048 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t62cj" event={"ID":"b449da8c-7bed-422e-bbf5-843c97f4b73b","Type":"ContainerDied","Data":"5bd325f3aebe2cc37ae19a5fb776d9939da553ff7f24efd58db3bbf298c7fe92"} Mar 13 11:12:58.961172 master-0 kubenswrapper[33013]: I0313 11:12:58.961091 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd325f3aebe2cc37ae19a5fb776d9939da553ff7f24efd58db3bbf298c7fe92" Mar 13 11:12:58.963820 master-0 kubenswrapper[33013]: I0313 11:12:58.963689 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t62cj" Mar 13 11:12:58.969267 master-0 kubenswrapper[33013]: I0313 11:12:58.969215 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c988-account-create-update-tbhzn" Mar 13 11:12:58.969402 master-0 kubenswrapper[33013]: I0313 11:12:58.969210 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c988-account-create-update-tbhzn" event={"ID":"22471c80-7d02-4478-a2d4-4ae9e68cb328","Type":"ContainerDied","Data":"bfe1802f936e7a75759d66b12f8418c2dd370af6705bcaed8fadfdbadb8e8f01"} Mar 13 11:12:58.969869 master-0 kubenswrapper[33013]: I0313 11:12:58.969381 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe1802f936e7a75759d66b12f8418c2dd370af6705bcaed8fadfdbadb8e8f01" Mar 13 11:12:58.971884 master-0 kubenswrapper[33013]: I0313 11:12:58.971824 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ggtxn" event={"ID":"25ceb46b-d857-4ddb-82c4-dbbb416ad706","Type":"ContainerDied","Data":"ea1d25181c3db4eb1c4a85b5203c751f74ac7b0f9763ba05eca4837b4f07d51b"} Mar 13 11:12:58.971966 master-0 kubenswrapper[33013]: I0313 11:12:58.971895 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea1d25181c3db4eb1c4a85b5203c751f74ac7b0f9763ba05eca4837b4f07d51b" Mar 13 11:12:58.994086 master-0 kubenswrapper[33013]: I0313 11:12:58.993989 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ggtxn" Mar 13 11:12:59.037459 master-0 kubenswrapper[33013]: I0313 11:12:59.036790 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrj9l\" (UniqueName: \"kubernetes.io/projected/22471c80-7d02-4478-a2d4-4ae9e68cb328-kube-api-access-mrj9l\") pod \"22471c80-7d02-4478-a2d4-4ae9e68cb328\" (UID: \"22471c80-7d02-4478-a2d4-4ae9e68cb328\") " Mar 13 11:12:59.037459 master-0 kubenswrapper[33013]: I0313 11:12:59.037028 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22471c80-7d02-4478-a2d4-4ae9e68cb328-operator-scripts\") pod \"22471c80-7d02-4478-a2d4-4ae9e68cb328\" (UID: \"22471c80-7d02-4478-a2d4-4ae9e68cb328\") " Mar 13 11:12:59.039074 master-0 kubenswrapper[33013]: I0313 11:12:59.039045 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22471c80-7d02-4478-a2d4-4ae9e68cb328-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22471c80-7d02-4478-a2d4-4ae9e68cb328" (UID: "22471c80-7d02-4478-a2d4-4ae9e68cb328"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:59.039530 master-0 kubenswrapper[33013]: I0313 11:12:59.039491 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22471c80-7d02-4478-a2d4-4ae9e68cb328-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:59.041914 master-0 kubenswrapper[33013]: I0313 11:12:59.041881 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22471c80-7d02-4478-a2d4-4ae9e68cb328-kube-api-access-mrj9l" (OuterVolumeSpecName: "kube-api-access-mrj9l") pod "22471c80-7d02-4478-a2d4-4ae9e68cb328" (UID: "22471c80-7d02-4478-a2d4-4ae9e68cb328"). InnerVolumeSpecName "kube-api-access-mrj9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:59.140729 master-0 kubenswrapper[33013]: I0313 11:12:59.140640 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrdg\" (UniqueName: \"kubernetes.io/projected/b449da8c-7bed-422e-bbf5-843c97f4b73b-kube-api-access-tvrdg\") pod \"b449da8c-7bed-422e-bbf5-843c97f4b73b\" (UID: \"b449da8c-7bed-422e-bbf5-843c97f4b73b\") " Mar 13 11:12:59.140876 master-0 kubenswrapper[33013]: I0313 11:12:59.140778 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b449da8c-7bed-422e-bbf5-843c97f4b73b-operator-scripts\") pod \"b449da8c-7bed-422e-bbf5-843c97f4b73b\" (UID: \"b449da8c-7bed-422e-bbf5-843c97f4b73b\") " Mar 13 11:12:59.140913 master-0 kubenswrapper[33013]: I0313 11:12:59.140893 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwd92\" (UniqueName: \"kubernetes.io/projected/25ceb46b-d857-4ddb-82c4-dbbb416ad706-kube-api-access-mwd92\") pod \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\" (UID: \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\") " Mar 13 11:12:59.141013 master-0 kubenswrapper[33013]: I0313 11:12:59.140987 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ceb46b-d857-4ddb-82c4-dbbb416ad706-operator-scripts\") pod \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\" (UID: \"25ceb46b-d857-4ddb-82c4-dbbb416ad706\") " Mar 13 11:12:59.141570 master-0 kubenswrapper[33013]: I0313 11:12:59.141547 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrj9l\" (UniqueName: \"kubernetes.io/projected/22471c80-7d02-4478-a2d4-4ae9e68cb328-kube-api-access-mrj9l\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:59.142291 master-0 kubenswrapper[33013]: I0313 11:12:59.142250 33013 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/b449da8c-7bed-422e-bbf5-843c97f4b73b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b449da8c-7bed-422e-bbf5-843c97f4b73b" (UID: "b449da8c-7bed-422e-bbf5-843c97f4b73b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:59.142853 master-0 kubenswrapper[33013]: I0313 11:12:59.142821 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ceb46b-d857-4ddb-82c4-dbbb416ad706-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25ceb46b-d857-4ddb-82c4-dbbb416ad706" (UID: "25ceb46b-d857-4ddb-82c4-dbbb416ad706"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:12:59.147693 master-0 kubenswrapper[33013]: I0313 11:12:59.145043 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b449da8c-7bed-422e-bbf5-843c97f4b73b-kube-api-access-tvrdg" (OuterVolumeSpecName: "kube-api-access-tvrdg") pod "b449da8c-7bed-422e-bbf5-843c97f4b73b" (UID: "b449da8c-7bed-422e-bbf5-843c97f4b73b"). InnerVolumeSpecName "kube-api-access-tvrdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:59.148280 master-0 kubenswrapper[33013]: I0313 11:12:59.147899 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ceb46b-d857-4ddb-82c4-dbbb416ad706-kube-api-access-mwd92" (OuterVolumeSpecName: "kube-api-access-mwd92") pod "25ceb46b-d857-4ddb-82c4-dbbb416ad706" (UID: "25ceb46b-d857-4ddb-82c4-dbbb416ad706"). InnerVolumeSpecName "kube-api-access-mwd92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:12:59.249566 master-0 kubenswrapper[33013]: I0313 11:12:59.247586 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvrdg\" (UniqueName: \"kubernetes.io/projected/b449da8c-7bed-422e-bbf5-843c97f4b73b-kube-api-access-tvrdg\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:59.249566 master-0 kubenswrapper[33013]: I0313 11:12:59.247915 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b449da8c-7bed-422e-bbf5-843c97f4b73b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:59.249566 master-0 kubenswrapper[33013]: I0313 11:12:59.247983 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwd92\" (UniqueName: \"kubernetes.io/projected/25ceb46b-d857-4ddb-82c4-dbbb416ad706-kube-api-access-mwd92\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:59.249566 master-0 kubenswrapper[33013]: I0313 11:12:59.247997 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ceb46b-d857-4ddb-82c4-dbbb416ad706-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:12:59.450676 master-0 kubenswrapper[33013]: W0313 11:12:59.448557 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07c1fb4b_6e38_4be9_aeac_dbbc884ec898.slice/crio-8a5553b3d2b425fc0ee1994da71203d2740d3812e07cccb7c917edf418798449 WatchSource:0}: Error finding container 8a5553b3d2b425fc0ee1994da71203d2740d3812e07cccb7c917edf418798449: Status 404 returned error can't find the container with id 8a5553b3d2b425fc0ee1994da71203d2740d3812e07cccb7c917edf418798449 Mar 13 11:12:59.467166 master-0 kubenswrapper[33013]: I0313 11:12:59.466904 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hfrhd"] Mar 13 11:13:00.010483 
master-0 kubenswrapper[33013]: I0313 11:13:00.010415 33013 generic.go:334] "Generic (PLEG): container finished" podID="07c1fb4b-6e38-4be9-aeac-dbbc884ec898" containerID="062b44d3234fbefd2d0811bb5ab3a7f77d252646d2e6f95408c389dbcc5e027d" exitCode=0 Mar 13 11:13:00.011114 master-0 kubenswrapper[33013]: I0313 11:13:00.010534 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t62cj" Mar 13 11:13:00.013446 master-0 kubenswrapper[33013]: I0313 11:13:00.013363 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" event={"ID":"07c1fb4b-6e38-4be9-aeac-dbbc884ec898","Type":"ContainerDied","Data":"062b44d3234fbefd2d0811bb5ab3a7f77d252646d2e6f95408c389dbcc5e027d"} Mar 13 11:13:00.013446 master-0 kubenswrapper[33013]: I0313 11:13:00.013446 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" event={"ID":"07c1fb4b-6e38-4be9-aeac-dbbc884ec898","Type":"ContainerStarted","Data":"8a5553b3d2b425fc0ee1994da71203d2740d3812e07cccb7c917edf418798449"} Mar 13 11:13:00.013779 master-0 kubenswrapper[33013]: I0313 11:13:00.013535 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ggtxn" Mar 13 11:13:03.077547 master-0 kubenswrapper[33013]: I0313 11:13:03.077494 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" event={"ID":"07c1fb4b-6e38-4be9-aeac-dbbc884ec898","Type":"ContainerStarted","Data":"c5680862838591842365c07d4062114c88e8de9c22d1d02155379918bb336b93"} Mar 13 11:13:03.078308 master-0 kubenswrapper[33013]: I0313 11:13:03.078276 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:13:03.080514 master-0 kubenswrapper[33013]: I0313 11:13:03.080491 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zpjq" event={"ID":"d7be7f77-638d-446e-b9a4-13195f124ca0","Type":"ContainerStarted","Data":"202ff16ce6cc503330a4aa39c9d938d3c4e72a43d474a9fa2922a928c2fc455e"} Mar 13 11:13:03.105348 master-0 kubenswrapper[33013]: I0313 11:13:03.105236 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" podStartSLOduration=5.105212971 podStartE2EDuration="5.105212971s" podCreationTimestamp="2026-03-13 11:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:03.09810738 +0000 UTC m=+966.574060729" watchObservedRunningTime="2026-03-13 11:13:03.105212971 +0000 UTC m=+966.581166320" Mar 13 11:13:03.127827 master-0 kubenswrapper[33013]: I0313 11:13:03.127649 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6zpjq" podStartSLOduration=2.677766547 podStartE2EDuration="10.127627023s" podCreationTimestamp="2026-03-13 11:12:53 +0000 UTC" firstStartedPulling="2026-03-13 11:12:55.078174034 +0000 UTC m=+958.554127383" lastFinishedPulling="2026-03-13 11:13:02.52803451 +0000 UTC m=+966.003987859" observedRunningTime="2026-03-13 
11:13:03.120110541 +0000 UTC m=+966.596063890" watchObservedRunningTime="2026-03-13 11:13:03.127627023 +0000 UTC m=+966.603580372" Mar 13 11:13:04.094102 master-0 kubenswrapper[33013]: I0313 11:13:04.093256 33013 generic.go:334] "Generic (PLEG): container finished" podID="ec43ecb9-e354-475a-aa0e-4dbe06716927" containerID="14db1c6a5dc645fba7b9f6fec826d2c7b5cd75b9acd71c3e98e311b0284c699e" exitCode=0 Mar 13 11:13:04.094102 master-0 kubenswrapper[33013]: I0313 11:13:04.093342 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-69lvv" event={"ID":"ec43ecb9-e354-475a-aa0e-4dbe06716927","Type":"ContainerDied","Data":"14db1c6a5dc645fba7b9f6fec826d2c7b5cd75b9acd71c3e98e311b0284c699e"} Mar 13 11:13:05.791307 master-0 kubenswrapper[33013]: I0313 11:13:05.791239 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-69lvv" Mar 13 11:13:05.918264 master-0 kubenswrapper[33013]: I0313 11:13:05.918199 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s99r\" (UniqueName: \"kubernetes.io/projected/ec43ecb9-e354-475a-aa0e-4dbe06716927-kube-api-access-9s99r\") pod \"ec43ecb9-e354-475a-aa0e-4dbe06716927\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " Mar 13 11:13:05.918616 master-0 kubenswrapper[33013]: I0313 11:13:05.918564 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-config-data\") pod \"ec43ecb9-e354-475a-aa0e-4dbe06716927\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " Mar 13 11:13:05.918675 master-0 kubenswrapper[33013]: I0313 11:13:05.918655 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-combined-ca-bundle\") pod \"ec43ecb9-e354-475a-aa0e-4dbe06716927\" (UID: 
\"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " Mar 13 11:13:05.918751 master-0 kubenswrapper[33013]: I0313 11:13:05.918715 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-db-sync-config-data\") pod \"ec43ecb9-e354-475a-aa0e-4dbe06716927\" (UID: \"ec43ecb9-e354-475a-aa0e-4dbe06716927\") " Mar 13 11:13:05.922495 master-0 kubenswrapper[33013]: I0313 11:13:05.922360 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ec43ecb9-e354-475a-aa0e-4dbe06716927" (UID: "ec43ecb9-e354-475a-aa0e-4dbe06716927"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:05.923203 master-0 kubenswrapper[33013]: I0313 11:13:05.923075 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec43ecb9-e354-475a-aa0e-4dbe06716927-kube-api-access-9s99r" (OuterVolumeSpecName: "kube-api-access-9s99r") pod "ec43ecb9-e354-475a-aa0e-4dbe06716927" (UID: "ec43ecb9-e354-475a-aa0e-4dbe06716927"). InnerVolumeSpecName "kube-api-access-9s99r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:05.946422 master-0 kubenswrapper[33013]: I0313 11:13:05.946346 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec43ecb9-e354-475a-aa0e-4dbe06716927" (UID: "ec43ecb9-e354-475a-aa0e-4dbe06716927"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:05.985335 master-0 kubenswrapper[33013]: I0313 11:13:05.981577 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-config-data" (OuterVolumeSpecName: "config-data") pod "ec43ecb9-e354-475a-aa0e-4dbe06716927" (UID: "ec43ecb9-e354-475a-aa0e-4dbe06716927"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:06.021740 master-0 kubenswrapper[33013]: I0313 11:13:06.021673 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s99r\" (UniqueName: \"kubernetes.io/projected/ec43ecb9-e354-475a-aa0e-4dbe06716927-kube-api-access-9s99r\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:06.021740 master-0 kubenswrapper[33013]: I0313 11:13:06.021727 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:06.021740 master-0 kubenswrapper[33013]: I0313 11:13:06.021740 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:06.021983 master-0 kubenswrapper[33013]: I0313 11:13:06.021753 33013 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec43ecb9-e354-475a-aa0e-4dbe06716927-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:06.118017 master-0 kubenswrapper[33013]: I0313 11:13:06.117938 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-69lvv" event={"ID":"ec43ecb9-e354-475a-aa0e-4dbe06716927","Type":"ContainerDied","Data":"96c280857d76d9635a70e14a4b1ffef9871778d4985df5b8e7d29b58b70f3fad"} Mar 13 11:13:06.118017 
master-0 kubenswrapper[33013]: I0313 11:13:06.118003 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-69lvv" Mar 13 11:13:06.118353 master-0 kubenswrapper[33013]: I0313 11:13:06.118012 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96c280857d76d9635a70e14a4b1ffef9871778d4985df5b8e7d29b58b70f3fad" Mar 13 11:13:06.842221 master-0 kubenswrapper[33013]: I0313 11:13:06.840630 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hfrhd"] Mar 13 11:13:06.842221 master-0 kubenswrapper[33013]: I0313 11:13:06.841040 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" podUID="07c1fb4b-6e38-4be9-aeac-dbbc884ec898" containerName="dnsmasq-dns" containerID="cri-o://c5680862838591842365c07d4062114c88e8de9c22d1d02155379918bb336b93" gracePeriod=10 Mar 13 11:13:06.857090 master-0 kubenswrapper[33013]: I0313 11:13:06.857012 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68c5dd5fdf-lff8l"] Mar 13 11:13:06.858143 master-0 kubenswrapper[33013]: E0313 11:13:06.858044 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22471c80-7d02-4478-a2d4-4ae9e68cb328" containerName="mariadb-account-create-update" Mar 13 11:13:06.858143 master-0 kubenswrapper[33013]: I0313 11:13:06.858140 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="22471c80-7d02-4478-a2d4-4ae9e68cb328" containerName="mariadb-account-create-update" Mar 13 11:13:06.858258 master-0 kubenswrapper[33013]: E0313 11:13:06.858184 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b449da8c-7bed-422e-bbf5-843c97f4b73b" containerName="mariadb-database-create" Mar 13 11:13:06.858258 master-0 kubenswrapper[33013]: I0313 11:13:06.858195 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b449da8c-7bed-422e-bbf5-843c97f4b73b" 
containerName="mariadb-database-create" Mar 13 11:13:06.858258 master-0 kubenswrapper[33013]: E0313 11:13:06.858216 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec43ecb9-e354-475a-aa0e-4dbe06716927" containerName="glance-db-sync" Mar 13 11:13:06.858258 master-0 kubenswrapper[33013]: I0313 11:13:06.858224 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec43ecb9-e354-475a-aa0e-4dbe06716927" containerName="glance-db-sync" Mar 13 11:13:06.858258 master-0 kubenswrapper[33013]: E0313 11:13:06.858258 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ceb46b-d857-4ddb-82c4-dbbb416ad706" containerName="mariadb-database-create" Mar 13 11:13:06.858414 master-0 kubenswrapper[33013]: I0313 11:13:06.858266 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ceb46b-d857-4ddb-82c4-dbbb416ad706" containerName="mariadb-database-create" Mar 13 11:13:06.858414 master-0 kubenswrapper[33013]: E0313 11:13:06.858283 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7529222b-1d6b-439e-8e73-023ecc18255a" containerName="mariadb-account-create-update" Mar 13 11:13:06.858414 master-0 kubenswrapper[33013]: I0313 11:13:06.858290 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="7529222b-1d6b-439e-8e73-023ecc18255a" containerName="mariadb-account-create-update" Mar 13 11:13:06.858648 master-0 kubenswrapper[33013]: I0313 11:13:06.858615 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b449da8c-7bed-422e-bbf5-843c97f4b73b" containerName="mariadb-database-create" Mar 13 11:13:06.859465 master-0 kubenswrapper[33013]: I0313 11:13:06.859431 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ceb46b-d857-4ddb-82c4-dbbb416ad706" containerName="mariadb-database-create" Mar 13 11:13:06.859516 master-0 kubenswrapper[33013]: I0313 11:13:06.859471 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="7529222b-1d6b-439e-8e73-023ecc18255a" 
containerName="mariadb-account-create-update" Mar 13 11:13:06.859516 master-0 kubenswrapper[33013]: I0313 11:13:06.859493 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="22471c80-7d02-4478-a2d4-4ae9e68cb328" containerName="mariadb-account-create-update" Mar 13 11:13:06.859621 master-0 kubenswrapper[33013]: I0313 11:13:06.859547 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec43ecb9-e354-475a-aa0e-4dbe06716927" containerName="glance-db-sync" Mar 13 11:13:06.861730 master-0 kubenswrapper[33013]: I0313 11:13:06.861448 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:06.874276 master-0 kubenswrapper[33013]: I0313 11:13:06.870032 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68c5dd5fdf-lff8l"] Mar 13 11:13:06.964208 master-0 kubenswrapper[33013]: I0313 11:13:06.964160 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-svc\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:06.964390 master-0 kubenswrapper[33013]: I0313 11:13:06.964368 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-swift-storage-0\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:06.964565 master-0 kubenswrapper[33013]: I0313 11:13:06.964549 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrm2s\" (UniqueName: 
\"kubernetes.io/projected/786d0394-1427-40dd-a9c8-231d5bc3dde3-kube-api-access-wrm2s\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:06.964929 master-0 kubenswrapper[33013]: I0313 11:13:06.964909 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-sb\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:06.965038 master-0 kubenswrapper[33013]: I0313 11:13:06.965024 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-nb\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:06.965122 master-0 kubenswrapper[33013]: I0313 11:13:06.965108 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-config\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.067475 master-0 kubenswrapper[33013]: I0313 11:13:07.067361 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrm2s\" (UniqueName: \"kubernetes.io/projected/786d0394-1427-40dd-a9c8-231d5bc3dde3-kube-api-access-wrm2s\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.067916 master-0 kubenswrapper[33013]: I0313 11:13:07.067576 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-sb\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.067916 master-0 kubenswrapper[33013]: I0313 11:13:07.067689 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-nb\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.067916 master-0 kubenswrapper[33013]: I0313 11:13:07.067732 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-config\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.067916 master-0 kubenswrapper[33013]: I0313 11:13:07.067886 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-svc\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.068077 master-0 kubenswrapper[33013]: I0313 11:13:07.068002 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-swift-storage-0\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.069314 master-0 kubenswrapper[33013]: I0313 11:13:07.068687 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-sb\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.069314 master-0 kubenswrapper[33013]: I0313 11:13:07.068839 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-swift-storage-0\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.069314 master-0 kubenswrapper[33013]: I0313 11:13:07.069260 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-config\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.069314 master-0 kubenswrapper[33013]: I0313 11:13:07.069308 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-nb\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.069682 master-0 kubenswrapper[33013]: I0313 11:13:07.069396 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-svc\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.100126 master-0 kubenswrapper[33013]: I0313 11:13:07.099610 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wrm2s\" (UniqueName: \"kubernetes.io/projected/786d0394-1427-40dd-a9c8-231d5bc3dde3-kube-api-access-wrm2s\") pod \"dnsmasq-dns-68c5dd5fdf-lff8l\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.140814 master-0 kubenswrapper[33013]: I0313 11:13:07.140714 33013 generic.go:334] "Generic (PLEG): container finished" podID="07c1fb4b-6e38-4be9-aeac-dbbc884ec898" containerID="c5680862838591842365c07d4062114c88e8de9c22d1d02155379918bb336b93" exitCode=0 Mar 13 11:13:07.141118 master-0 kubenswrapper[33013]: I0313 11:13:07.141074 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" event={"ID":"07c1fb4b-6e38-4be9-aeac-dbbc884ec898","Type":"ContainerDied","Data":"c5680862838591842365c07d4062114c88e8de9c22d1d02155379918bb336b93"} Mar 13 11:13:07.299326 master-0 kubenswrapper[33013]: I0313 11:13:07.299274 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:07.400677 master-0 kubenswrapper[33013]: I0313 11:13:07.400572 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:13:07.477643 master-0 kubenswrapper[33013]: I0313 11:13:07.477045 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-svc\") pod \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " Mar 13 11:13:07.477643 master-0 kubenswrapper[33013]: I0313 11:13:07.477151 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-swift-storage-0\") pod \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " Mar 13 11:13:07.477643 master-0 kubenswrapper[33013]: I0313 11:13:07.477206 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-sb\") pod \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " Mar 13 11:13:07.477643 master-0 kubenswrapper[33013]: I0313 11:13:07.477276 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-nb\") pod \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " Mar 13 11:13:07.477643 master-0 kubenswrapper[33013]: I0313 11:13:07.477327 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8vw5\" (UniqueName: \"kubernetes.io/projected/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-kube-api-access-h8vw5\") pod \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " Mar 13 11:13:07.477643 master-0 kubenswrapper[33013]: I0313 11:13:07.477409 
33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-config\") pod \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\" (UID: \"07c1fb4b-6e38-4be9-aeac-dbbc884ec898\") " Mar 13 11:13:07.484480 master-0 kubenswrapper[33013]: I0313 11:13:07.484148 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-kube-api-access-h8vw5" (OuterVolumeSpecName: "kube-api-access-h8vw5") pod "07c1fb4b-6e38-4be9-aeac-dbbc884ec898" (UID: "07c1fb4b-6e38-4be9-aeac-dbbc884ec898"). InnerVolumeSpecName "kube-api-access-h8vw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:07.543373 master-0 kubenswrapper[33013]: I0313 11:13:07.542232 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "07c1fb4b-6e38-4be9-aeac-dbbc884ec898" (UID: "07c1fb4b-6e38-4be9-aeac-dbbc884ec898"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:07.551245 master-0 kubenswrapper[33013]: I0313 11:13:07.551085 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "07c1fb4b-6e38-4be9-aeac-dbbc884ec898" (UID: "07c1fb4b-6e38-4be9-aeac-dbbc884ec898"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:07.552117 master-0 kubenswrapper[33013]: I0313 11:13:07.552087 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "07c1fb4b-6e38-4be9-aeac-dbbc884ec898" (UID: "07c1fb4b-6e38-4be9-aeac-dbbc884ec898"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:07.557181 master-0 kubenswrapper[33013]: I0313 11:13:07.557152 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07c1fb4b-6e38-4be9-aeac-dbbc884ec898" (UID: "07c1fb4b-6e38-4be9-aeac-dbbc884ec898"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:07.558578 master-0 kubenswrapper[33013]: I0313 11:13:07.558548 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-config" (OuterVolumeSpecName: "config") pod "07c1fb4b-6e38-4be9-aeac-dbbc884ec898" (UID: "07c1fb4b-6e38-4be9-aeac-dbbc884ec898"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:07.581037 master-0 kubenswrapper[33013]: I0313 11:13:07.580975 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:07.581037 master-0 kubenswrapper[33013]: I0313 11:13:07.581023 33013 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:07.581037 master-0 kubenswrapper[33013]: I0313 11:13:07.581040 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:07.581694 master-0 kubenswrapper[33013]: I0313 11:13:07.581052 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:07.581694 master-0 kubenswrapper[33013]: I0313 11:13:07.581068 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8vw5\" (UniqueName: \"kubernetes.io/projected/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-kube-api-access-h8vw5\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:07.581694 master-0 kubenswrapper[33013]: I0313 11:13:07.581079 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c1fb4b-6e38-4be9-aeac-dbbc884ec898-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:08.155075 master-0 kubenswrapper[33013]: I0313 11:13:08.154890 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" 
event={"ID":"07c1fb4b-6e38-4be9-aeac-dbbc884ec898","Type":"ContainerDied","Data":"8a5553b3d2b425fc0ee1994da71203d2740d3812e07cccb7c917edf418798449"} Mar 13 11:13:08.155075 master-0 kubenswrapper[33013]: I0313 11:13:08.154974 33013 scope.go:117] "RemoveContainer" containerID="c5680862838591842365c07d4062114c88e8de9c22d1d02155379918bb336b93" Mar 13 11:13:08.155075 master-0 kubenswrapper[33013]: I0313 11:13:08.154923 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bd79cd5f-hfrhd" Mar 13 11:13:08.158687 master-0 kubenswrapper[33013]: I0313 11:13:08.158629 33013 generic.go:334] "Generic (PLEG): container finished" podID="d7be7f77-638d-446e-b9a4-13195f124ca0" containerID="202ff16ce6cc503330a4aa39c9d938d3c4e72a43d474a9fa2922a928c2fc455e" exitCode=0 Mar 13 11:13:08.158687 master-0 kubenswrapper[33013]: I0313 11:13:08.158678 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zpjq" event={"ID":"d7be7f77-638d-446e-b9a4-13195f124ca0","Type":"ContainerDied","Data":"202ff16ce6cc503330a4aa39c9d938d3c4e72a43d474a9fa2922a928c2fc455e"} Mar 13 11:13:08.182848 master-0 kubenswrapper[33013]: I0313 11:13:08.182689 33013 scope.go:117] "RemoveContainer" containerID="062b44d3234fbefd2d0811bb5ab3a7f77d252646d2e6f95408c389dbcc5e027d" Mar 13 11:13:08.286618 master-0 kubenswrapper[33013]: I0313 11:13:08.286549 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68c5dd5fdf-lff8l"] Mar 13 11:13:09.079745 master-0 kubenswrapper[33013]: I0313 11:13:09.079684 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hfrhd"] Mar 13 11:13:09.156238 master-0 kubenswrapper[33013]: I0313 11:13:09.155783 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bd79cd5f-hfrhd"] Mar 13 11:13:09.173368 master-0 kubenswrapper[33013]: I0313 11:13:09.172702 33013 generic.go:334] "Generic (PLEG): container finished" 
podID="786d0394-1427-40dd-a9c8-231d5bc3dde3" containerID="8afbc27bd04f92cf0271394d94f167bd7df1504d75c1b9e9b99352e5c9f04373" exitCode=0 Mar 13 11:13:09.173368 master-0 kubenswrapper[33013]: I0313 11:13:09.172781 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" event={"ID":"786d0394-1427-40dd-a9c8-231d5bc3dde3","Type":"ContainerDied","Data":"8afbc27bd04f92cf0271394d94f167bd7df1504d75c1b9e9b99352e5c9f04373"} Mar 13 11:13:09.173368 master-0 kubenswrapper[33013]: I0313 11:13:09.172811 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" event={"ID":"786d0394-1427-40dd-a9c8-231d5bc3dde3","Type":"ContainerStarted","Data":"0842b46d22e0fc48de39f25e89d561307f71d18dac139ec5dc274482c5bb39bf"} Mar 13 11:13:09.666253 master-0 kubenswrapper[33013]: I0313 11:13:09.665289 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:13:09.746118 master-0 kubenswrapper[33013]: I0313 11:13:09.745973 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvht2\" (UniqueName: \"kubernetes.io/projected/d7be7f77-638d-446e-b9a4-13195f124ca0-kube-api-access-tvht2\") pod \"d7be7f77-638d-446e-b9a4-13195f124ca0\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " Mar 13 11:13:09.746321 master-0 kubenswrapper[33013]: I0313 11:13:09.746193 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-combined-ca-bundle\") pod \"d7be7f77-638d-446e-b9a4-13195f124ca0\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " Mar 13 11:13:09.746406 master-0 kubenswrapper[33013]: I0313 11:13:09.746372 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-config-data\") pod \"d7be7f77-638d-446e-b9a4-13195f124ca0\" (UID: \"d7be7f77-638d-446e-b9a4-13195f124ca0\") " Mar 13 11:13:09.750006 master-0 kubenswrapper[33013]: I0313 11:13:09.749951 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7be7f77-638d-446e-b9a4-13195f124ca0-kube-api-access-tvht2" (OuterVolumeSpecName: "kube-api-access-tvht2") pod "d7be7f77-638d-446e-b9a4-13195f124ca0" (UID: "d7be7f77-638d-446e-b9a4-13195f124ca0"). InnerVolumeSpecName "kube-api-access-tvht2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:09.776553 master-0 kubenswrapper[33013]: I0313 11:13:09.776469 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7be7f77-638d-446e-b9a4-13195f124ca0" (UID: "d7be7f77-638d-446e-b9a4-13195f124ca0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:09.796980 master-0 kubenswrapper[33013]: I0313 11:13:09.796895 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-config-data" (OuterVolumeSpecName: "config-data") pod "d7be7f77-638d-446e-b9a4-13195f124ca0" (UID: "d7be7f77-638d-446e-b9a4-13195f124ca0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:09.849970 master-0 kubenswrapper[33013]: I0313 11:13:09.849888 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvht2\" (UniqueName: \"kubernetes.io/projected/d7be7f77-638d-446e-b9a4-13195f124ca0-kube-api-access-tvht2\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:09.849970 master-0 kubenswrapper[33013]: I0313 11:13:09.849942 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:09.849970 master-0 kubenswrapper[33013]: I0313 11:13:09.849956 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7be7f77-638d-446e-b9a4-13195f124ca0-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:10.205615 master-0 kubenswrapper[33013]: I0313 11:13:10.202905 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" event={"ID":"786d0394-1427-40dd-a9c8-231d5bc3dde3","Type":"ContainerStarted","Data":"57e41afa6e85fd3eb4cc687b9c4837a1560d60e8d436b4af6ed87d204392fd44"} Mar 13 11:13:10.206298 master-0 kubenswrapper[33013]: I0313 11:13:10.205935 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:10.227614 master-0 kubenswrapper[33013]: I0313 11:13:10.221816 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zpjq" event={"ID":"d7be7f77-638d-446e-b9a4-13195f124ca0","Type":"ContainerDied","Data":"a85721262e34e505f039ee0d27d2a385680f0f48675cf0ccbcd267892ea2fc83"} Mar 13 11:13:10.227614 master-0 kubenswrapper[33013]: I0313 11:13:10.221864 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a85721262e34e505f039ee0d27d2a385680f0f48675cf0ccbcd267892ea2fc83" Mar 
13 11:13:10.227614 master-0 kubenswrapper[33013]: I0313 11:13:10.221927 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6zpjq" Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: I0313 11:13:10.368655 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" podStartSLOduration=4.368632316 podStartE2EDuration="4.368632316s" podCreationTimestamp="2026-03-13 11:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:10.275905501 +0000 UTC m=+973.751858850" watchObservedRunningTime="2026-03-13 11:13:10.368632316 +0000 UTC m=+973.844585665" Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: I0313 11:13:10.372788 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-phs4s"] Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: E0313 11:13:10.373802 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7be7f77-638d-446e-b9a4-13195f124ca0" containerName="keystone-db-sync" Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: I0313 11:13:10.373824 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7be7f77-638d-446e-b9a4-13195f124ca0" containerName="keystone-db-sync" Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: E0313 11:13:10.373910 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c1fb4b-6e38-4be9-aeac-dbbc884ec898" containerName="init" Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: I0313 11:13:10.373919 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c1fb4b-6e38-4be9-aeac-dbbc884ec898" containerName="init" Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: E0313 11:13:10.373937 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c1fb4b-6e38-4be9-aeac-dbbc884ec898" containerName="dnsmasq-dns" Mar 13 
11:13:10.379618 master-0 kubenswrapper[33013]: I0313 11:13:10.373944 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c1fb4b-6e38-4be9-aeac-dbbc884ec898" containerName="dnsmasq-dns" Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: I0313 11:13:10.374144 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c1fb4b-6e38-4be9-aeac-dbbc884ec898" containerName="dnsmasq-dns" Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: I0313 11:13:10.374168 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7be7f77-638d-446e-b9a4-13195f124ca0" containerName="keystone-db-sync" Mar 13 11:13:10.379618 master-0 kubenswrapper[33013]: I0313 11:13:10.375387 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.380224 master-0 kubenswrapper[33013]: I0313 11:13:10.380125 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 13 11:13:10.381617 master-0 kubenswrapper[33013]: I0313 11:13:10.380361 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 13 11:13:10.381617 master-0 kubenswrapper[33013]: I0313 11:13:10.380422 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 13 11:13:10.381617 master-0 kubenswrapper[33013]: I0313 11:13:10.380687 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 13 11:13:10.404023 master-0 kubenswrapper[33013]: I0313 11:13:10.401701 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-phs4s"] Mar 13 11:13:10.450611 master-0 kubenswrapper[33013]: I0313 11:13:10.444656 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68c5dd5fdf-lff8l"] Mar 13 11:13:10.463629 master-0 kubenswrapper[33013]: I0313 11:13:10.459257 33013 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-974fc7ff5-d7sq9"] Mar 13 11:13:10.463629 master-0 kubenswrapper[33013]: I0313 11:13:10.461285 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.497202 master-0 kubenswrapper[33013]: I0313 11:13:10.483627 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hnww\" (UniqueName: \"kubernetes.io/projected/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-kube-api-access-5hnww\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.497202 master-0 kubenswrapper[33013]: I0313 11:13:10.483693 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-combined-ca-bundle\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.497202 master-0 kubenswrapper[33013]: I0313 11:13:10.483770 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-fernet-keys\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.497202 master-0 kubenswrapper[33013]: I0313 11:13:10.483800 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-scripts\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.497202 master-0 kubenswrapper[33013]: I0313 11:13:10.483882 33013 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-credential-keys\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.497202 master-0 kubenswrapper[33013]: I0313 11:13:10.483979 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-config-data\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.502756 master-0 kubenswrapper[33013]: I0313 11:13:10.500790 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-974fc7ff5-d7sq9"] Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.585807 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-swift-storage-0\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.585969 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-sb\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.586057 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-credential-keys\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.586155 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-nb\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.586183 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-config\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.586205 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2vg9\" (UniqueName: \"kubernetes.io/projected/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-kube-api-access-k2vg9\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.586253 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-config-data\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.586317 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5hnww\" (UniqueName: \"kubernetes.io/projected/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-kube-api-access-5hnww\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.586350 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-combined-ca-bundle\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.586377 master-0 kubenswrapper[33013]: I0313 11:13:10.586394 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-svc\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.587075 master-0 kubenswrapper[33013]: I0313 11:13:10.586434 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-fernet-keys\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.587075 master-0 kubenswrapper[33013]: I0313 11:13:10.586456 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-scripts\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.592618 master-0 kubenswrapper[33013]: I0313 11:13:10.592541 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-credential-keys\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.605424 master-0 kubenswrapper[33013]: I0313 11:13:10.605358 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-fernet-keys\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.606203 master-0 kubenswrapper[33013]: I0313 11:13:10.606163 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-combined-ca-bundle\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.608749 master-0 kubenswrapper[33013]: I0313 11:13:10.608702 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-config-data\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.616728 master-0 kubenswrapper[33013]: I0313 11:13:10.614267 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4x5s9"] Mar 13 11:13:10.623704 master-0 kubenswrapper[33013]: I0313 11:13:10.618430 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-scripts\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.645655 master-0 kubenswrapper[33013]: I0313 11:13:10.626316 33013 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.645655 master-0 kubenswrapper[33013]: I0313 11:13:10.632049 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 13 11:13:10.645655 master-0 kubenswrapper[33013]: I0313 11:13:10.638124 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 13 11:13:10.668684 master-0 kubenswrapper[33013]: I0313 11:13:10.662809 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-w9jfw"] Mar 13 11:13:10.668684 master-0 kubenswrapper[33013]: I0313 11:13:10.663697 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hnww\" (UniqueName: \"kubernetes.io/projected/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-kube-api-access-5hnww\") pod \"keystone-bootstrap-phs4s\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") " pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.668684 master-0 kubenswrapper[33013]: I0313 11:13:10.664195 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-w9jfw" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.704456 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-swift-storage-0\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.704543 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-sb\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.704606 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-config\") pod \"neutron-db-sync-4x5s9\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.705463 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-swift-storage-0\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.705496 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kswhl\" (UniqueName: \"kubernetes.io/projected/24aec7cb-081e-4a89-80bb-b11d4e085557-kube-api-access-kswhl\") pod 
\"neutron-db-sync-4x5s9\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.706806 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-phs4s" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.707676 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-sb\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.708065 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-nb\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.708104 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-config\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.708126 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2vg9\" (UniqueName: \"kubernetes.io/projected/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-kube-api-access-k2vg9\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.708160 33013 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-combined-ca-bundle\") pod \"neutron-db-sync-4x5s9\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.708328 master-0 kubenswrapper[33013]: I0313 11:13:10.708299 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-svc\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.720509 master-0 kubenswrapper[33013]: I0313 11:13:10.709138 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-svc\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.720509 master-0 kubenswrapper[33013]: I0313 11:13:10.709202 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-config\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.720509 master-0 kubenswrapper[33013]: I0313 11:13:10.709881 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-nb\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.742808 master-0 kubenswrapper[33013]: I0313 11:13:10.735500 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="07c1fb4b-6e38-4be9-aeac-dbbc884ec898" path="/var/lib/kubelet/pods/07c1fb4b-6e38-4be9-aeac-dbbc884ec898/volumes" Mar 13 11:13:10.746699 master-0 kubenswrapper[33013]: I0313 11:13:10.743147 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-w9jfw"] Mar 13 11:13:10.756787 master-0 kubenswrapper[33013]: I0313 11:13:10.756348 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2vg9\" (UniqueName: \"kubernetes.io/projected/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-kube-api-access-k2vg9\") pod \"dnsmasq-dns-974fc7ff5-d7sq9\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.816429 master-0 kubenswrapper[33013]: I0313 11:13:10.815575 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-config\") pod \"neutron-db-sync-4x5s9\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.816429 master-0 kubenswrapper[33013]: I0313 11:13:10.815680 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kswhl\" (UniqueName: \"kubernetes.io/projected/24aec7cb-081e-4a89-80bb-b11d4e085557-kube-api-access-kswhl\") pod \"neutron-db-sync-4x5s9\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.816429 master-0 kubenswrapper[33013]: I0313 11:13:10.815757 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-combined-ca-bundle\") pod \"neutron-db-sync-4x5s9\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.816429 master-0 kubenswrapper[33013]: I0313 11:13:10.815807 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b901e4-e1c4-41bf-8083-31d19c301c44-operator-scripts\") pod \"ironic-db-create-w9jfw\" (UID: \"65b901e4-e1c4-41bf-8083-31d19c301c44\") " pod="openstack/ironic-db-create-w9jfw" Mar 13 11:13:10.816429 master-0 kubenswrapper[33013]: I0313 11:13:10.815910 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf7zg\" (UniqueName: \"kubernetes.io/projected/65b901e4-e1c4-41bf-8083-31d19c301c44-kube-api-access-jf7zg\") pod \"ironic-db-create-w9jfw\" (UID: \"65b901e4-e1c4-41bf-8083-31d19c301c44\") " pod="openstack/ironic-db-create-w9jfw" Mar 13 11:13:10.830333 master-0 kubenswrapper[33013]: I0313 11:13:10.830024 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-config\") pod \"neutron-db-sync-4x5s9\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.831769 master-0 kubenswrapper[33013]: I0313 11:13:10.831039 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-combined-ca-bundle\") pod \"neutron-db-sync-4x5s9\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.851401 master-0 kubenswrapper[33013]: I0313 11:13:10.850662 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4x5s9"] Mar 13 11:13:10.862974 master-0 kubenswrapper[33013]: I0313 11:13:10.862068 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kswhl\" (UniqueName: \"kubernetes.io/projected/24aec7cb-081e-4a89-80bb-b11d4e085557-kube-api-access-kswhl\") pod \"neutron-db-sync-4x5s9\" (UID: 
\"24aec7cb-081e-4a89-80bb-b11d4e085557\") " pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:10.870255 master-0 kubenswrapper[33013]: I0313 11:13:10.870203 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:10.952305 master-0 kubenswrapper[33013]: I0313 11:13:10.918162 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b901e4-e1c4-41bf-8083-31d19c301c44-operator-scripts\") pod \"ironic-db-create-w9jfw\" (UID: \"65b901e4-e1c4-41bf-8083-31d19c301c44\") " pod="openstack/ironic-db-create-w9jfw" Mar 13 11:13:10.952305 master-0 kubenswrapper[33013]: I0313 11:13:10.918888 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf7zg\" (UniqueName: \"kubernetes.io/projected/65b901e4-e1c4-41bf-8083-31d19c301c44-kube-api-access-jf7zg\") pod \"ironic-db-create-w9jfw\" (UID: \"65b901e4-e1c4-41bf-8083-31d19c301c44\") " pod="openstack/ironic-db-create-w9jfw" Mar 13 11:13:10.952305 master-0 kubenswrapper[33013]: I0313 11:13:10.920345 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b901e4-e1c4-41bf-8083-31d19c301c44-operator-scripts\") pod \"ironic-db-create-w9jfw\" (UID: \"65b901e4-e1c4-41bf-8083-31d19c301c44\") " pod="openstack/ironic-db-create-w9jfw" Mar 13 11:13:10.952305 master-0 kubenswrapper[33013]: I0313 11:13:10.924653 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-17f4-account-create-update-pgqvs"] Mar 13 11:13:10.952305 master-0 kubenswrapper[33013]: I0313 11:13:10.926161 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-17f4-account-create-update-pgqvs" Mar 13 11:13:10.952305 master-0 kubenswrapper[33013]: I0313 11:13:10.928758 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Mar 13 11:13:10.966755 master-0 kubenswrapper[33013]: I0313 11:13:10.956399 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-17f4-account-create-update-pgqvs"] Mar 13 11:13:11.022530 master-0 kubenswrapper[33013]: I0313 11:13:11.006777 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf7zg\" (UniqueName: \"kubernetes.io/projected/65b901e4-e1c4-41bf-8083-31d19c301c44-kube-api-access-jf7zg\") pod \"ironic-db-create-w9jfw\" (UID: \"65b901e4-e1c4-41bf-8083-31d19c301c44\") " pod="openstack/ironic-db-create-w9jfw" Mar 13 11:13:11.050379 master-0 kubenswrapper[33013]: I0313 11:13:11.047512 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt55k\" (UniqueName: \"kubernetes.io/projected/63be249f-23c2-4c9a-a6f3-3f9355da4f66-kube-api-access-nt55k\") pod \"ironic-17f4-account-create-update-pgqvs\" (UID: \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\") " pod="openstack/ironic-17f4-account-create-update-pgqvs" Mar 13 11:13:11.050379 master-0 kubenswrapper[33013]: I0313 11:13:11.047740 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63be249f-23c2-4c9a-a6f3-3f9355da4f66-operator-scripts\") pod \"ironic-17f4-account-create-update-pgqvs\" (UID: \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\") " pod="openstack/ironic-17f4-account-create-update-pgqvs" Mar 13 11:13:11.148465 master-0 kubenswrapper[33013]: I0313 11:13:11.147245 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:11.149206 master-0 kubenswrapper[33013]: I0313 11:13:11.149168 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63be249f-23c2-4c9a-a6f3-3f9355da4f66-operator-scripts\") pod \"ironic-17f4-account-create-update-pgqvs\" (UID: \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\") " pod="openstack/ironic-17f4-account-create-update-pgqvs" Mar 13 11:13:11.149266 master-0 kubenswrapper[33013]: I0313 11:13:11.149248 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt55k\" (UniqueName: \"kubernetes.io/projected/63be249f-23c2-4c9a-a6f3-3f9355da4f66-kube-api-access-nt55k\") pod \"ironic-17f4-account-create-update-pgqvs\" (UID: \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\") " pod="openstack/ironic-17f4-account-create-update-pgqvs" Mar 13 11:13:11.150344 master-0 kubenswrapper[33013]: I0313 11:13:11.150312 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63be249f-23c2-4c9a-a6f3-3f9355da4f66-operator-scripts\") pod \"ironic-17f4-account-create-update-pgqvs\" (UID: \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\") " pod="openstack/ironic-17f4-account-create-update-pgqvs" Mar 13 11:13:11.157918 master-0 kubenswrapper[33013]: I0313 11:13:11.157847 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceac4-db-sync-trrwb"] Mar 13 11:13:11.160539 master-0 kubenswrapper[33013]: I0313 11:13:11.160413 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-db-sync-trrwb" Mar 13 11:13:11.165743 master-0 kubenswrapper[33013]: I0313 11:13:11.165703 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-scripts" Mar 13 11:13:11.167011 master-0 kubenswrapper[33013]: I0313 11:13:11.166191 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-config-data" Mar 13 11:13:11.174166 master-0 kubenswrapper[33013]: I0313 11:13:11.173370 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-w9jfw" Mar 13 11:13:11.185026 master-0 kubenswrapper[33013]: I0313 11:13:11.182197 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt55k\" (UniqueName: \"kubernetes.io/projected/63be249f-23c2-4c9a-a6f3-3f9355da4f66-kube-api-access-nt55k\") pod \"ironic-17f4-account-create-update-pgqvs\" (UID: \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\") " pod="openstack/ironic-17f4-account-create-update-pgqvs" Mar 13 11:13:11.201031 master-0 kubenswrapper[33013]: I0313 11:13:11.200299 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-db-sync-trrwb"] Mar 13 11:13:11.240954 master-0 kubenswrapper[33013]: I0313 11:13:11.239955 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2j64b"] Mar 13 11:13:11.241700 master-0 kubenswrapper[33013]: I0313 11:13:11.241648 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.247006 master-0 kubenswrapper[33013]: I0313 11:13:11.246959 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 13 11:13:11.247222 master-0 kubenswrapper[33013]: I0313 11:13:11.247197 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 13 11:13:11.275143 master-0 kubenswrapper[33013]: I0313 11:13:11.275086 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-974fc7ff5-d7sq9"]
Mar 13 11:13:11.295363 master-0 kubenswrapper[33013]: I0313 11:13:11.295315 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2j64b"]
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380440 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-db-sync-config-data\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380493 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-config-data\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380535 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-scripts\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380576 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mr8\" (UniqueName: \"kubernetes.io/projected/40e31a77-1481-4eb8-a192-604aad9eaaf8-kube-api-access-q9mr8\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380616 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-scripts\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380649 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgxrl\" (UniqueName: \"kubernetes.io/projected/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-kube-api-access-rgxrl\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380709 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e31a77-1481-4eb8-a192-604aad9eaaf8-etc-machine-id\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380780 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-logs\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380822 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-combined-ca-bundle\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380846 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-combined-ca-bundle\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.380868 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-config-data\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.388058 master-0 kubenswrapper[33013]: I0313 11:13:11.381308 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-17f4-account-create-update-pgqvs"
Mar 13 11:13:11.425785 master-0 kubenswrapper[33013]: I0313 11:13:11.423857 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"]
Mar 13 11:13:11.426017 master-0 kubenswrapper[33013]: I0313 11:13:11.425900 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.451799 master-0 kubenswrapper[33013]: I0313 11:13:11.450688 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"]
Mar 13 11:13:11.487420 master-0 kubenswrapper[33013]: I0313 11:13:11.482340 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-logs\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.488479 master-0 kubenswrapper[33013]: I0313 11:13:11.488445 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-combined-ca-bundle\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.488639 master-0 kubenswrapper[33013]: I0313 11:13:11.488620 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-combined-ca-bundle\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.488741 master-0 kubenswrapper[33013]: I0313 11:13:11.483240 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-logs\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.488826 master-0 kubenswrapper[33013]: I0313 11:13:11.488797 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-config-data\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.488924 master-0 kubenswrapper[33013]: I0313 11:13:11.488911 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-db-sync-config-data\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.489011 master-0 kubenswrapper[33013]: I0313 11:13:11.489000 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-config-data\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.489140 master-0 kubenswrapper[33013]: I0313 11:13:11.489126 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-scripts\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.489261 master-0 kubenswrapper[33013]: I0313 11:13:11.489242 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9mr8\" (UniqueName: \"kubernetes.io/projected/40e31a77-1481-4eb8-a192-604aad9eaaf8-kube-api-access-q9mr8\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.489366 master-0 kubenswrapper[33013]: I0313 11:13:11.489352 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-scripts\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.489488 master-0 kubenswrapper[33013]: I0313 11:13:11.489475 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgxrl\" (UniqueName: \"kubernetes.io/projected/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-kube-api-access-rgxrl\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.489725 master-0 kubenswrapper[33013]: I0313 11:13:11.489710 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e31a77-1481-4eb8-a192-604aad9eaaf8-etc-machine-id\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.490529 master-0 kubenswrapper[33013]: I0313 11:13:11.490513 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e31a77-1481-4eb8-a192-604aad9eaaf8-etc-machine-id\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.500622 master-0 kubenswrapper[33013]: I0313 11:13:11.498685 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-db-sync-config-data\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.504267 master-0 kubenswrapper[33013]: I0313 11:13:11.503341 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-combined-ca-bundle\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.504267 master-0 kubenswrapper[33013]: I0313 11:13:11.504069 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-scripts\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.504267 master-0 kubenswrapper[33013]: I0313 11:13:11.504256 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-scripts\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.504399 master-0 kubenswrapper[33013]: I0313 11:13:11.504316 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-combined-ca-bundle\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.511278 master-0 kubenswrapper[33013]: I0313 11:13:11.505153 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-config-data\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.511278 master-0 kubenswrapper[33013]: I0313 11:13:11.507119 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-config-data\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.555302 master-0 kubenswrapper[33013]: I0313 11:13:11.555219 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-phs4s"]
Mar 13 11:13:11.555750 master-0 kubenswrapper[33013]: I0313 11:13:11.555710 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9mr8\" (UniqueName: \"kubernetes.io/projected/40e31a77-1481-4eb8-a192-604aad9eaaf8-kube-api-access-q9mr8\") pod \"cinder-ceac4-db-sync-trrwb\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.559678 master-0 kubenswrapper[33013]: I0313 11:13:11.557017 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgxrl\" (UniqueName: \"kubernetes.io/projected/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-kube-api-access-rgxrl\") pod \"placement-db-sync-2j64b\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") " pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.573378 master-0 kubenswrapper[33013]: I0313 11:13:11.573317 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-db-sync-trrwb"
Mar 13 11:13:11.595895 master-0 kubenswrapper[33013]: I0313 11:13:11.595111 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-sb\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.595895 master-0 kubenswrapper[33013]: I0313 11:13:11.595222 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-swift-storage-0\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.595895 master-0 kubenswrapper[33013]: I0313 11:13:11.595297 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-nb\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.595895 master-0 kubenswrapper[33013]: I0313 11:13:11.595368 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-config\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.595895 master-0 kubenswrapper[33013]: I0313 11:13:11.595456 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdz6r\" (UniqueName: \"kubernetes.io/projected/e5137299-9cd3-46e0-9689-4416b06029db-kube-api-access-fdz6r\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.595895 master-0 kubenswrapper[33013]: I0313 11:13:11.595566 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-svc\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.621624 master-0 kubenswrapper[33013]: I0313 11:13:11.613089 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:11.698717 master-0 kubenswrapper[33013]: I0313 11:13:11.697461 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-sb\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.698717 master-0 kubenswrapper[33013]: I0313 11:13:11.697542 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-swift-storage-0\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.698717 master-0 kubenswrapper[33013]: I0313 11:13:11.697636 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-nb\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.698717 master-0 kubenswrapper[33013]: I0313 11:13:11.697702 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-config\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.698717 master-0 kubenswrapper[33013]: I0313 11:13:11.697762 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdz6r\" (UniqueName: \"kubernetes.io/projected/e5137299-9cd3-46e0-9689-4416b06029db-kube-api-access-fdz6r\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.698717 master-0 kubenswrapper[33013]: I0313 11:13:11.697850 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-svc\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.699071 master-0 kubenswrapper[33013]: I0313 11:13:11.698825 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-svc\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.699381 master-0 kubenswrapper[33013]: I0313 11:13:11.699346 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-sb\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.700671 master-0 kubenswrapper[33013]: I0313 11:13:11.700639 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-swift-storage-0\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.701560 master-0 kubenswrapper[33013]: I0313 11:13:11.701524 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-nb\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.701719 master-0 kubenswrapper[33013]: I0313 11:13:11.701553 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-974fc7ff5-d7sq9"]
Mar 13 11:13:11.702270 master-0 kubenswrapper[33013]: I0313 11:13:11.702242 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-config\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.760345 master-0 kubenswrapper[33013]: I0313 11:13:11.760269 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdz6r\" (UniqueName: \"kubernetes.io/projected/e5137299-9cd3-46e0-9689-4416b06029db-kube-api-access-fdz6r\") pod \"dnsmasq-dns-dc5fdb9b9-vmzb7\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.781001 master-0 kubenswrapper[33013]: I0313 11:13:11.780967 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:11.973564 master-0 kubenswrapper[33013]: I0313 11:13:11.972829 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-w9jfw"]
Mar 13 11:13:11.998001 master-0 kubenswrapper[33013]: I0313 11:13:11.988567 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4x5s9"]
Mar 13 11:13:12.372081 master-0 kubenswrapper[33013]: I0313 11:13:12.356978 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-17f4-account-create-update-pgqvs"]
Mar 13 11:13:12.372964 master-0 kubenswrapper[33013]: I0313 11:13:12.372484 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-phs4s" event={"ID":"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5","Type":"ContainerStarted","Data":"b31c021ca0ad71c5cbd5655b2a563b3647021150402cf3e523799684f7cd9c4f"}
Mar 13 11:13:12.372964 master-0 kubenswrapper[33013]: I0313 11:13:12.372542 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-phs4s" event={"ID":"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5","Type":"ContainerStarted","Data":"dfad83118483ce15e2f7a4762b9eaddbfada81956b988e396773f599a3d7d2f0"}
Mar 13 11:13:12.382611 master-0 kubenswrapper[33013]: I0313 11:13:12.380880 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4x5s9" event={"ID":"24aec7cb-081e-4a89-80bb-b11d4e085557","Type":"ContainerStarted","Data":"355180f1a9cbd89b40cc9fcd41b62ffbcdad881774ab5c331d89cca441cdc526"}
Mar 13 11:13:12.399918 master-0 kubenswrapper[33013]: I0313 11:13:12.394430 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-db-sync-trrwb"]
Mar 13 11:13:12.418613 master-0 kubenswrapper[33013]: I0313 11:13:12.416665 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-w9jfw" event={"ID":"65b901e4-e1c4-41bf-8083-31d19c301c44","Type":"ContainerStarted","Data":"76fa770ee9e83f4818227faeedd970e382c1b8850b1d43e171ca7396040a3174"}
Mar 13 11:13:12.431424 master-0 kubenswrapper[33013]: I0313 11:13:12.420322 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" podUID="786d0394-1427-40dd-a9c8-231d5bc3dde3" containerName="dnsmasq-dns" containerID="cri-o://57e41afa6e85fd3eb4cc687b9c4837a1560d60e8d436b4af6ed87d204392fd44" gracePeriod=10
Mar 13 11:13:12.431424 master-0 kubenswrapper[33013]: I0313 11:13:12.420858 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" event={"ID":"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b","Type":"ContainerStarted","Data":"b60ce003b0aba736bc427368c7e0d8f70b62b738ad3c60b0488aa87ffecdbd97"}
Mar 13 11:13:12.431424 master-0 kubenswrapper[33013]: W0313 11:13:12.425066 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63be249f_23c2_4c9a_a6f3_3f9355da4f66.slice/crio-92568b129e8c80b6906eec3765e15dcc01b694cb134f6b37a464fbb7db325733 WatchSource:0}: Error finding container 92568b129e8c80b6906eec3765e15dcc01b694cb134f6b37a464fbb7db325733: Status 404 returned error can't find the container with id 92568b129e8c80b6906eec3765e15dcc01b694cb134f6b37a464fbb7db325733
Mar 13 11:13:12.431424 master-0 kubenswrapper[33013]: I0313 11:13:12.426099 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-phs4s" podStartSLOduration=2.426081206 podStartE2EDuration="2.426081206s" podCreationTimestamp="2026-03-13 11:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:12.41629934 +0000 UTC m=+975.892252689" watchObservedRunningTime="2026-03-13 11:13:12.426081206 +0000 UTC m=+975.902034555"
Mar 13 11:13:12.493486 master-0 kubenswrapper[33013]: I0313 11:13:12.483224 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-87aa4-default-external-api-0"]
Mar 13 11:13:12.493486 master-0 kubenswrapper[33013]: I0313 11:13:12.485639 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.493486 master-0 kubenswrapper[33013]: I0313 11:13:12.493427 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Mar 13 11:13:12.494689 master-0 kubenswrapper[33013]: I0313 11:13:12.493665 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-87aa4-default-external-config-data"
Mar 13 11:13:12.497579 master-0 kubenswrapper[33013]: I0313 11:13:12.497043 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"]
Mar 13 11:13:12.511120 master-0 kubenswrapper[33013]: W0313 11:13:12.509716 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40e31a77_1481_4eb8_a192_604aad9eaaf8.slice/crio-eaaf2bc7594cc40f31eadd66e2f3a2c42e747f957a27fb27ffab0649c9d698d9 WatchSource:0}: Error finding container eaaf2bc7594cc40f31eadd66e2f3a2c42e747f957a27fb27ffab0649c9d698d9: Status 404 returned error can't find the container with id eaaf2bc7594cc40f31eadd66e2f3a2c42e747f957a27fb27ffab0649c9d698d9
Mar 13 11:13:12.619991 master-0 kubenswrapper[33013]: I0313 11:13:12.619940 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2j64b"]
Mar 13 11:13:12.658862 master-0 kubenswrapper[33013]: I0313 11:13:12.658737 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.658862 master-0 kubenswrapper[33013]: I0313 11:13:12.658854 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.660189 master-0 kubenswrapper[33013]: I0313 11:13:12.658929 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.660189 master-0 kubenswrapper[33013]: I0313 11:13:12.659139 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.660189 master-0 kubenswrapper[33013]: I0313 11:13:12.659298 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjhj6\" (UniqueName: \"kubernetes.io/projected/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-kube-api-access-xjhj6\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.660189 master-0 kubenswrapper[33013]: I0313 11:13:12.659340 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.660189 master-0 kubenswrapper[33013]: I0313 11:13:12.659398 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.802532 master-0 kubenswrapper[33013]: I0313 11:13:12.802318 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjhj6\" (UniqueName: \"kubernetes.io/projected/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-kube-api-access-xjhj6\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.806913 master-0 kubenswrapper[33013]: I0313 11:13:12.806851 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.813578 master-0 kubenswrapper[33013]: I0313 11:13:12.813505 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.814576 master-0 kubenswrapper[33013]: I0313 11:13:12.814543 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.814968 master-0 kubenswrapper[33013]: I0313 11:13:12.814838 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.815144 master-0 kubenswrapper[33013]: I0313 11:13:12.815123 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.815851 master-0 kubenswrapper[33013]: I0313 11:13:12.815804 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.834885 master-0 kubenswrapper[33013]: I0313 11:13:12.817845 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.834885 master-0 kubenswrapper[33013]: I0313 11:13:12.818131 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.834885 master-0 kubenswrapper[33013]: I0313 11:13:12.830320 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 13 11:13:12.834885 master-0 kubenswrapper[33013]: I0313 11:13:12.830376 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/02d92e594b7cf20d10752edde97d9397ac0766c013b947c8de1147a201f75769/globalmount\"" pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.840751 master-0 kubenswrapper[33013]: I0313 11:13:12.838410 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"]
Mar 13 11:13:12.949211 master-0 kubenswrapper[33013]: I0313 11:13:12.949143 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.949814 master-0 kubenswrapper[33013]: I0313 11:13:12.949766 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.953921 master-0 kubenswrapper[33013]: I0313 11:13:12.953872 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:12.957195 master-0 kubenswrapper[33013]: I0313 11:13:12.956987 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjhj6\" (UniqueName: \"kubernetes.io/projected/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-kube-api-access-xjhj6\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:13.440620 master-0 kubenswrapper[33013]: I0313 11:13:13.439540 33013 generic.go:334] "Generic (PLEG): container finished" podID="786d0394-1427-40dd-a9c8-231d5bc3dde3" containerID="57e41afa6e85fd3eb4cc687b9c4837a1560d60e8d436b4af6ed87d204392fd44" exitCode=0
Mar 13 11:13:13.440620 master-0 kubenswrapper[33013]: I0313 11:13:13.439738 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" event={"ID":"786d0394-1427-40dd-a9c8-231d5bc3dde3","Type":"ContainerDied","Data":"57e41afa6e85fd3eb4cc687b9c4837a1560d60e8d436b4af6ed87d204392fd44"}
Mar 13 11:13:13.440620 master-0 kubenswrapper[33013]: I0313 11:13:13.439784 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" event={"ID":"786d0394-1427-40dd-a9c8-231d5bc3dde3","Type":"ContainerDied","Data":"0842b46d22e0fc48de39f25e89d561307f71d18dac139ec5dc274482c5bb39bf"}
Mar 13 11:13:13.440620 master-0 kubenswrapper[33013]: I0313 11:13:13.439798 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0842b46d22e0fc48de39f25e89d561307f71d18dac139ec5dc274482c5bb39bf"
Mar 13 11:13:13.447733 master-0 kubenswrapper[33013]: I0313 11:13:13.447350 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-w9jfw" event={"ID":"65b901e4-e1c4-41bf-8083-31d19c301c44","Type":"ContainerStarted","Data":"ac2ab892080c57772f3ee958aad89a10dbe5286091b115cabeec1c4fb79e6710"}
Mar 13 11:13:13.454832 master-0 kubenswrapper[33013]: I0313 11:13:13.454575 33013 generic.go:334] "Generic (PLEG): container finished" podID="1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" containerID="9ada0fc621fc01c3de8cd6385e3cb714c59095ea74a0a335231d7460dde4f43e" exitCode=0
Mar 13 11:13:13.455246 master-0 kubenswrapper[33013]: I0313 11:13:13.455195 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" event={"ID":"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b","Type":"ContainerDied","Data":"9ada0fc621fc01c3de8cd6385e3cb714c59095ea74a0a335231d7460dde4f43e"}
Mar 13 11:13:13.458142 master-0 kubenswrapper[33013]: I0313 11:13:13.458081 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7" event={"ID":"e5137299-9cd3-46e0-9689-4416b06029db","Type":"ContainerStarted","Data":"16c9a98d827fcc239efb3dd3a2c7cefdeb444a0b955c68042909628fb499a2d4"}
Mar 13 11:13:13.463176 master-0 kubenswrapper[33013]: I0313 11:13:13.462319 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-17f4-account-create-update-pgqvs" event={"ID":"63be249f-23c2-4c9a-a6f3-3f9355da4f66","Type":"ContainerStarted","Data":"cc9e74126f6e28c2405e63503ed9bf16a1aedf9db3a5871620e6246703f63ccd"}
Mar 13 11:13:13.463176 master-0 kubenswrapper[33013]: I0313 11:13:13.462375 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-17f4-account-create-update-pgqvs"
event={"ID":"63be249f-23c2-4c9a-a6f3-3f9355da4f66","Type":"ContainerStarted","Data":"92568b129e8c80b6906eec3765e15dcc01b694cb134f6b37a464fbb7db325733"} Mar 13 11:13:13.465949 master-0 kubenswrapper[33013]: I0313 11:13:13.465823 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2j64b" event={"ID":"240bf9bb-a4f9-4b00-9f3b-da8db52d618a","Type":"ContainerStarted","Data":"bd6db7b879612222695369aed4f4abc9b2885c1070bfaf86b1e36b51d7b3bdef"} Mar 13 11:13:13.481998 master-0 kubenswrapper[33013]: I0313 11:13:13.481657 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-create-w9jfw" podStartSLOduration=3.481633663 podStartE2EDuration="3.481633663s" podCreationTimestamp="2026-03-13 11:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:13.47018025 +0000 UTC m=+976.946133599" watchObservedRunningTime="2026-03-13 11:13:13.481633663 +0000 UTC m=+976.957587012" Mar 13 11:13:13.482252 master-0 kubenswrapper[33013]: I0313 11:13:13.481980 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4x5s9" event={"ID":"24aec7cb-081e-4a89-80bb-b11d4e085557","Type":"ContainerStarted","Data":"61a0b5b5445d45b58f01dcc32d8fe030dd270a4b6d133b800bec37de73e0d424"} Mar 13 11:13:13.488926 master-0 kubenswrapper[33013]: I0313 11:13:13.488827 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-db-sync-trrwb" event={"ID":"40e31a77-1481-4eb8-a192-604aad9eaaf8","Type":"ContainerStarted","Data":"eaaf2bc7594cc40f31eadd66e2f3a2c42e747f957a27fb27ffab0649c9d698d9"} Mar 13 11:13:13.514822 master-0 kubenswrapper[33013]: I0313 11:13:13.512305 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-17f4-account-create-update-pgqvs" podStartSLOduration=3.512277537 podStartE2EDuration="3.512277537s" 
podCreationTimestamp="2026-03-13 11:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:13.501300737 +0000 UTC m=+976.977254086" watchObservedRunningTime="2026-03-13 11:13:13.512277537 +0000 UTC m=+976.988230906" Mar 13 11:13:13.532275 master-0 kubenswrapper[33013]: I0313 11:13:13.527620 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l" Mar 13 11:13:13.662658 master-0 kubenswrapper[33013]: I0313 11:13:13.658800 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-svc\") pod \"786d0394-1427-40dd-a9c8-231d5bc3dde3\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " Mar 13 11:13:13.662658 master-0 kubenswrapper[33013]: I0313 11:13:13.658902 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-sb\") pod \"786d0394-1427-40dd-a9c8-231d5bc3dde3\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " Mar 13 11:13:13.662658 master-0 kubenswrapper[33013]: I0313 11:13:13.659082 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-nb\") pod \"786d0394-1427-40dd-a9c8-231d5bc3dde3\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " Mar 13 11:13:13.662658 master-0 kubenswrapper[33013]: I0313 11:13:13.659235 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-swift-storage-0\") pod \"786d0394-1427-40dd-a9c8-231d5bc3dde3\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " Mar 13 
11:13:13.662658 master-0 kubenswrapper[33013]: I0313 11:13:13.659266 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-config\") pod \"786d0394-1427-40dd-a9c8-231d5bc3dde3\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " Mar 13 11:13:13.662658 master-0 kubenswrapper[33013]: I0313 11:13:13.659313 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrm2s\" (UniqueName: \"kubernetes.io/projected/786d0394-1427-40dd-a9c8-231d5bc3dde3-kube-api-access-wrm2s\") pod \"786d0394-1427-40dd-a9c8-231d5bc3dde3\" (UID: \"786d0394-1427-40dd-a9c8-231d5bc3dde3\") " Mar 13 11:13:13.701415 master-0 kubenswrapper[33013]: I0313 11:13:13.687334 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786d0394-1427-40dd-a9c8-231d5bc3dde3-kube-api-access-wrm2s" (OuterVolumeSpecName: "kube-api-access-wrm2s") pod "786d0394-1427-40dd-a9c8-231d5bc3dde3" (UID: "786d0394-1427-40dd-a9c8-231d5bc3dde3"). InnerVolumeSpecName "kube-api-access-wrm2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:13.701415 master-0 kubenswrapper[33013]: I0313 11:13:13.694956 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:13:13.701415 master-0 kubenswrapper[33013]: E0313 11:13:13.695660 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786d0394-1427-40dd-a9c8-231d5bc3dde3" containerName="init" Mar 13 11:13:13.701415 master-0 kubenswrapper[33013]: I0313 11:13:13.695675 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="786d0394-1427-40dd-a9c8-231d5bc3dde3" containerName="init" Mar 13 11:13:13.701415 master-0 kubenswrapper[33013]: E0313 11:13:13.695690 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786d0394-1427-40dd-a9c8-231d5bc3dde3" containerName="dnsmasq-dns" Mar 13 11:13:13.701415 master-0 kubenswrapper[33013]: I0313 11:13:13.695697 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="786d0394-1427-40dd-a9c8-231d5bc3dde3" containerName="dnsmasq-dns" Mar 13 11:13:13.701415 master-0 kubenswrapper[33013]: I0313 11:13:13.695968 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="786d0394-1427-40dd-a9c8-231d5bc3dde3" containerName="dnsmasq-dns" Mar 13 11:13:13.701415 master-0 kubenswrapper[33013]: I0313 11:13:13.697103 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.701415 master-0 kubenswrapper[33013]: I0313 11:13:13.701340 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4x5s9" podStartSLOduration=3.699618182 podStartE2EDuration="3.699618182s" podCreationTimestamp="2026-03-13 11:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:13.66340324 +0000 UTC m=+977.139356589" watchObservedRunningTime="2026-03-13 11:13:13.699618182 +0000 UTC m=+977.175571531" Mar 13 11:13:13.703519 master-0 kubenswrapper[33013]: I0313 11:13:13.703093 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-87aa4-default-internal-config-data" Mar 13 11:13:13.786197 master-0 kubenswrapper[33013]: I0313 11:13:13.786086 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9z5\" (UniqueName: \"kubernetes.io/projected/2e935de4-7311-4e53-8e37-fc54aac0c5df-kube-api-access-mr9z5\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.786197 master-0 kubenswrapper[33013]: I0313 11:13:13.786168 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.786489 master-0 kubenswrapper[33013]: I0313 11:13:13.786311 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.786489 master-0 kubenswrapper[33013]: I0313 11:13:13.786342 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.786489 master-0 kubenswrapper[33013]: I0313 11:13:13.786362 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.786489 master-0 kubenswrapper[33013]: I0313 11:13:13.786387 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.786489 master-0 kubenswrapper[33013]: I0313 11:13:13.786453 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.787029 master-0 kubenswrapper[33013]: I0313 
11:13:13.787004 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrm2s\" (UniqueName: \"kubernetes.io/projected/786d0394-1427-40dd-a9c8-231d5bc3dde3-kube-api-access-wrm2s\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:13.824161 master-0 kubenswrapper[33013]: I0313 11:13:13.824098 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "786d0394-1427-40dd-a9c8-231d5bc3dde3" (UID: "786d0394-1427-40dd-a9c8-231d5bc3dde3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:13.849513 master-0 kubenswrapper[33013]: I0313 11:13:13.849354 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-config" (OuterVolumeSpecName: "config") pod "786d0394-1427-40dd-a9c8-231d5bc3dde3" (UID: "786d0394-1427-40dd-a9c8-231d5bc3dde3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:13.898633 master-0 kubenswrapper[33013]: I0313 11:13:13.898532 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "786d0394-1427-40dd-a9c8-231d5bc3dde3" (UID: "786d0394-1427-40dd-a9c8-231d5bc3dde3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:13.918162 master-0 kubenswrapper[33013]: I0313 11:13:13.915402 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "786d0394-1427-40dd-a9c8-231d5bc3dde3" (UID: "786d0394-1427-40dd-a9c8-231d5bc3dde3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.943661 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9z5\" (UniqueName: \"kubernetes.io/projected/2e935de4-7311-4e53-8e37-fc54aac0c5df-kube-api-access-mr9z5\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.943788 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.944074 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.944123 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.944175 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod 
\"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.944218 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.944289 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.944470 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.944484 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.944497 33013 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.944508 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:13.945738 master-0 kubenswrapper[33013]: I0313 11:13:13.945118 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.946288 master-0 kubenswrapper[33013]: I0313 11:13:13.946121 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.974204 master-0 kubenswrapper[33013]: I0313 11:13:13.965904 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9z5\" (UniqueName: \"kubernetes.io/projected/2e935de4-7311-4e53-8e37-fc54aac0c5df-kube-api-access-mr9z5\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.974204 master-0 kubenswrapper[33013]: I0313 11:13:13.967534 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 11:13:13.974204 master-0 kubenswrapper[33013]: I0313 11:13:13.967576 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/946cdf3189fcbc367fb7e7cfd5e4aad164d151a73965b6f865a738752ef6bb2a/globalmount\"" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.974204 master-0 kubenswrapper[33013]: I0313 11:13:13.973481 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.986221 master-0 kubenswrapper[33013]: I0313 11:13:13.975710 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:13:13.986221 master-0 kubenswrapper[33013]: I0313 11:13:13.981888 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:13.996115 master-0 kubenswrapper[33013]: I0313 11:13:13.992451 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:14.027042 
master-0 kubenswrapper[33013]: I0313 11:13:14.026923 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "786d0394-1427-40dd-a9c8-231d5bc3dde3" (UID: "786d0394-1427-40dd-a9c8-231d5bc3dde3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:14.069644 master-0 kubenswrapper[33013]: I0313 11:13:14.059366 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/786d0394-1427-40dd-a9c8-231d5bc3dde3-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:14.147640 master-0 kubenswrapper[33013]: I0313 11:13:14.139562 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:13:14.147640 master-0 kubenswrapper[33013]: E0313 11:13:14.141567 33013 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-87aa4-default-external-api-0" podUID="ae86f2d6-2fdf-49b2-85d3-f049c035a4c7" Mar 13 11:13:14.207662 master-0 kubenswrapper[33013]: I0313 11:13:14.207002 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" Mar 13 11:13:14.342672 master-0 kubenswrapper[33013]: I0313 11:13:14.342599 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:13:14.343468 master-0 kubenswrapper[33013]: E0313 11:13:14.343427 33013 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-87aa4-default-internal-api-0" podUID="2e935de4-7311-4e53-8e37-fc54aac0c5df" Mar 13 11:13:14.377857 master-0 kubenswrapper[33013]: I0313 11:13:14.377664 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-svc\") pod \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " Mar 13 11:13:14.378069 master-0 kubenswrapper[33013]: I0313 11:13:14.377868 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-sb\") pod \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " Mar 13 11:13:14.378069 master-0 kubenswrapper[33013]: I0313 11:13:14.378025 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2vg9\" (UniqueName: \"kubernetes.io/projected/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-kube-api-access-k2vg9\") pod \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " Mar 13 11:13:14.378136 master-0 kubenswrapper[33013]: I0313 11:13:14.378089 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-config\") pod \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\" 
(UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " Mar 13 11:13:14.378136 master-0 kubenswrapper[33013]: I0313 11:13:14.378122 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-swift-storage-0\") pod \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " Mar 13 11:13:14.378201 master-0 kubenswrapper[33013]: I0313 11:13:14.378155 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-nb\") pod \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\" (UID: \"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b\") " Mar 13 11:13:14.409468 master-0 kubenswrapper[33013]: I0313 11:13:14.409370 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" (UID: "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:14.409936 master-0 kubenswrapper[33013]: I0313 11:13:14.409880 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-kube-api-access-k2vg9" (OuterVolumeSpecName: "kube-api-access-k2vg9") pod "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" (UID: "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b"). InnerVolumeSpecName "kube-api-access-k2vg9". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:13:14.431244 master-0 kubenswrapper[33013]: I0313 11:13:14.431149 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" (UID: "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:13:14.443137 master-0 kubenswrapper[33013]: I0313 11:13:14.442047 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-config" (OuterVolumeSpecName: "config") pod "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" (UID: "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:13:14.483722 master-0 kubenswrapper[33013]: I0313 11:13:14.482214 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-config\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.483722 master-0 kubenswrapper[33013]: I0313 11:13:14.482266 33013 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.483722 master-0 kubenswrapper[33013]: I0313 11:13:14.482281 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.483722 master-0 kubenswrapper[33013]: I0313 11:13:14.482297 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2vg9\" (UniqueName: \"kubernetes.io/projected/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-kube-api-access-k2vg9\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.485330 master-0 kubenswrapper[33013]: I0313 11:13:14.484416 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" (UID: "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:13:14.512031 master-0 kubenswrapper[33013]: I0313 11:13:14.511973 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" (UID: "1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:13:14.560034 master-0 kubenswrapper[33013]: I0313 11:13:14.556739 33013 generic.go:334] "Generic (PLEG): container finished" podID="65b901e4-e1c4-41bf-8083-31d19c301c44" containerID="ac2ab892080c57772f3ee958aad89a10dbe5286091b115cabeec1c4fb79e6710" exitCode=0
Mar 13 11:13:14.560034 master-0 kubenswrapper[33013]: I0313 11:13:14.556822 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-w9jfw" event={"ID":"65b901e4-e1c4-41bf-8083-31d19c301c44","Type":"ContainerDied","Data":"ac2ab892080c57772f3ee958aad89a10dbe5286091b115cabeec1c4fb79e6710"}
Mar 13 11:13:14.573664 master-0 kubenswrapper[33013]: I0313 11:13:14.567475 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9" event={"ID":"1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b","Type":"ContainerDied","Data":"b60ce003b0aba736bc427368c7e0d8f70b62b738ad3c60b0488aa87ffecdbd97"}
Mar 13 11:13:14.573664 master-0 kubenswrapper[33013]: I0313 11:13:14.567544 33013 scope.go:117] "RemoveContainer" containerID="9ada0fc621fc01c3de8cd6385e3cb714c59095ea74a0a335231d7460dde4f43e"
Mar 13 11:13:14.573664 master-0 kubenswrapper[33013]: I0313 11:13:14.567687 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-974fc7ff5-d7sq9"
Mar 13 11:13:14.601947 master-0 kubenswrapper[33013]: I0313 11:13:14.596503 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.601947 master-0 kubenswrapper[33013]: I0313 11:13:14.596564 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.603309 master-0 kubenswrapper[33013]: I0313 11:13:14.602395 33013 generic.go:334] "Generic (PLEG): container finished" podID="e5137299-9cd3-46e0-9689-4416b06029db" containerID="f8a43fe1ddc91b52f91b0ee9e6d62dbfe00a0b9a8cb023da4ae58b9e602c364a" exitCode=0
Mar 13 11:13:14.603309 master-0 kubenswrapper[33013]: I0313 11:13:14.602518 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7" event={"ID":"e5137299-9cd3-46e0-9689-4416b06029db","Type":"ContainerDied","Data":"f8a43fe1ddc91b52f91b0ee9e6d62dbfe00a0b9a8cb023da4ae58b9e602c364a"}
Mar 13 11:13:14.618880 master-0 kubenswrapper[33013]: I0313 11:13:14.616844 33013 generic.go:334] "Generic (PLEG): container finished" podID="63be249f-23c2-4c9a-a6f3-3f9355da4f66" containerID="cc9e74126f6e28c2405e63503ed9bf16a1aedf9db3a5871620e6246703f63ccd" exitCode=0
Mar 13 11:13:14.618880 master-0 kubenswrapper[33013]: I0313 11:13:14.616981 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:14.618880 master-0 kubenswrapper[33013]: I0313 11:13:14.616999 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-17f4-account-create-update-pgqvs" event={"ID":"63be249f-23c2-4c9a-a6f3-3f9355da4f66","Type":"ContainerDied","Data":"cc9e74126f6e28c2405e63503ed9bf16a1aedf9db3a5871620e6246703f63ccd"}
Mar 13 11:13:14.618880 master-0 kubenswrapper[33013]: I0313 11:13:14.617251 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:14.618880 master-0 kubenswrapper[33013]: I0313 11:13:14.617391 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68c5dd5fdf-lff8l"
Mar 13 11:13:14.652168 master-0 kubenswrapper[33013]: I0313 11:13:14.651958 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:14.700412 master-0 kubenswrapper[33013]: I0313 11:13:14.697305 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-config-data\") pod \"2e935de4-7311-4e53-8e37-fc54aac0c5df\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") "
Mar 13 11:13:14.700412 master-0 kubenswrapper[33013]: I0313 11:13:14.697463 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr9z5\" (UniqueName: \"kubernetes.io/projected/2e935de4-7311-4e53-8e37-fc54aac0c5df-kube-api-access-mr9z5\") pod \"2e935de4-7311-4e53-8e37-fc54aac0c5df\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") "
Mar 13 11:13:14.700412 master-0 kubenswrapper[33013]: I0313 11:13:14.697572 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-combined-ca-bundle\") pod \"2e935de4-7311-4e53-8e37-fc54aac0c5df\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") "
Mar 13 11:13:14.700412 master-0 kubenswrapper[33013]: I0313 11:13:14.697613 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-scripts\") pod \"2e935de4-7311-4e53-8e37-fc54aac0c5df\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") "
Mar 13 11:13:14.700412 master-0 kubenswrapper[33013]: I0313 11:13:14.697677 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-httpd-run\") pod \"2e935de4-7311-4e53-8e37-fc54aac0c5df\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") "
Mar 13 11:13:14.700412 master-0 kubenswrapper[33013]: I0313 11:13:14.697721 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-logs\") pod \"2e935de4-7311-4e53-8e37-fc54aac0c5df\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") "
Mar 13 11:13:14.721618 master-0 kubenswrapper[33013]: I0313 11:13:14.706159 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2e935de4-7311-4e53-8e37-fc54aac0c5df" (UID: "2e935de4-7311-4e53-8e37-fc54aac0c5df"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:13:14.721618 master-0 kubenswrapper[33013]: I0313 11:13:14.706366 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-logs" (OuterVolumeSpecName: "logs") pod "2e935de4-7311-4e53-8e37-fc54aac0c5df" (UID: "2e935de4-7311-4e53-8e37-fc54aac0c5df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:13:14.801894 master-0 kubenswrapper[33013]: I0313 11:13:14.801748 33013 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.801894 master-0 kubenswrapper[33013]: I0313 11:13:14.801810 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e935de4-7311-4e53-8e37-fc54aac0c5df-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.819113 master-0 kubenswrapper[33013]: I0313 11:13:14.816705 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:14.819113 master-0 kubenswrapper[33013]: I0313 11:13:14.818120 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:14.820364 master-0 kubenswrapper[33013]: I0313 11:13:14.820284 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-config-data" (OuterVolumeSpecName: "config-data") pod "2e935de4-7311-4e53-8e37-fc54aac0c5df" (UID: "2e935de4-7311-4e53-8e37-fc54aac0c5df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:14.832514 master-0 kubenswrapper[33013]: I0313 11:13:14.832412 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-scripts" (OuterVolumeSpecName: "scripts") pod "2e935de4-7311-4e53-8e37-fc54aac0c5df" (UID: "2e935de4-7311-4e53-8e37-fc54aac0c5df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:14.832819 master-0 kubenswrapper[33013]: I0313 11:13:14.832577 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e935de4-7311-4e53-8e37-fc54aac0c5df-kube-api-access-mr9z5" (OuterVolumeSpecName: "kube-api-access-mr9z5") pod "2e935de4-7311-4e53-8e37-fc54aac0c5df" (UID: "2e935de4-7311-4e53-8e37-fc54aac0c5df"). InnerVolumeSpecName "kube-api-access-mr9z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:13:14.833487 master-0 kubenswrapper[33013]: I0313 11:13:14.833398 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e935de4-7311-4e53-8e37-fc54aac0c5df" (UID: "2e935de4-7311-4e53-8e37-fc54aac0c5df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:14.924940 master-0 kubenswrapper[33013]: I0313 11:13:14.910794 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.924940 master-0 kubenswrapper[33013]: I0313 11:13:14.910848 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr9z5\" (UniqueName: \"kubernetes.io/projected/2e935de4-7311-4e53-8e37-fc54aac0c5df-kube-api-access-mr9z5\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.924940 master-0 kubenswrapper[33013]: I0313 11:13:14.910861 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.924940 master-0 kubenswrapper[33013]: I0313 11:13:14.910875 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e935de4-7311-4e53-8e37-fc54aac0c5df-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:14.960640 master-0 kubenswrapper[33013]: I0313 11:13:14.960550 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-974fc7ff5-d7sq9"]
Mar 13 11:13:14.992752 master-0 kubenswrapper[33013]: I0313 11:13:14.990706 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-974fc7ff5-d7sq9"]
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.013371 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjhj6\" (UniqueName: \"kubernetes.io/projected/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-kube-api-access-xjhj6\") pod \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") "
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.013640 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") "
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.013677 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-combined-ca-bundle\") pod \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") "
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.013727 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-httpd-run\") pod \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") "
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.013756 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-scripts\") pod \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") "
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.013824 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-config-data\") pod \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") "
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.013854 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-logs\") pod \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\" (UID: \"ae86f2d6-2fdf-49b2-85d3-f049c035a4c7\") "
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.014329 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-logs" (OuterVolumeSpecName: "logs") pod "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7" (UID: "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.015083 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7" (UID: "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.015614 33013 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:15.019177 master-0 kubenswrapper[33013]: I0313 11:13:15.015635 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:15.036729 master-0 kubenswrapper[33013]: I0313 11:13:15.022013 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-kube-api-access-xjhj6" (OuterVolumeSpecName: "kube-api-access-xjhj6") pod "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7" (UID: "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7"). InnerVolumeSpecName "kube-api-access-xjhj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:13:15.036729 master-0 kubenswrapper[33013]: I0313 11:13:15.030148 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68c5dd5fdf-lff8l"]
Mar 13 11:13:15.047342 master-0 kubenswrapper[33013]: I0313 11:13:15.045076 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-scripts" (OuterVolumeSpecName: "scripts") pod "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7" (UID: "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:15.050618 master-0 kubenswrapper[33013]: I0313 11:13:15.050541 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7" (UID: "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:15.064826 master-0 kubenswrapper[33013]: I0313 11:13:15.064756 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-config-data" (OuterVolumeSpecName: "config-data") pod "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7" (UID: "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:15.068854 master-0 kubenswrapper[33013]: I0313 11:13:15.068736 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68c5dd5fdf-lff8l"]
Mar 13 11:13:15.118300 master-0 kubenswrapper[33013]: I0313 11:13:15.118219 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjhj6\" (UniqueName: \"kubernetes.io/projected/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-kube-api-access-xjhj6\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:15.118300 master-0 kubenswrapper[33013]: I0313 11:13:15.118262 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:15.118300 master-0 kubenswrapper[33013]: I0313 11:13:15.118273 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:15.118300 master-0 kubenswrapper[33013]: I0313 11:13:15.118284 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:15.637373 master-0 kubenswrapper[33013]: I0313 11:13:15.637314 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:15.637968 master-0 kubenswrapper[33013]: I0313 11:13:15.637736 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7" event={"ID":"e5137299-9cd3-46e0-9689-4416b06029db","Type":"ContainerStarted","Data":"a5e2bfc5e6a076e2ec2afcf0d059532ea53005041d4e5c7f2d32740bd0be3c66"}
Mar 13 11:13:15.637968 master-0 kubenswrapper[33013]: I0313 11:13:15.637782 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:15.638393 master-0 kubenswrapper[33013]: I0313 11:13:15.638359 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:15.689909 master-0 kubenswrapper[33013]: I0313 11:13:15.689825 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7" podStartSLOduration=5.689803023 podStartE2EDuration="5.689803023s" podCreationTimestamp="2026-03-13 11:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:15.6889691 +0000 UTC m=+979.164922449" watchObservedRunningTime="2026-03-13 11:13:15.689803023 +0000 UTC m=+979.165756372"
Mar 13 11:13:15.851706 master-0 kubenswrapper[33013]: I0313 11:13:15.845861 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"]
Mar 13 11:13:15.866854 master-0 kubenswrapper[33013]: I0313 11:13:15.866780 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"]
Mar 13 11:13:15.878451 master-0 kubenswrapper[33013]: I0313 11:13:15.878400 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"]
Mar 13 11:13:15.879610 master-0 kubenswrapper[33013]: E0313 11:13:15.879573 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" containerName="init"
Mar 13 11:13:15.879746 master-0 kubenswrapper[33013]: I0313 11:13:15.879718 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" containerName="init"
Mar 13 11:13:15.880438 master-0 kubenswrapper[33013]: I0313 11:13:15.880407 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" containerName="init"
Mar 13 11:13:15.882478 master-0 kubenswrapper[33013]: I0313 11:13:15.882456 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:15.890192 master-0 kubenswrapper[33013]: I0313 11:13:15.890135 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-87aa4-default-internal-config-data"
Mar 13 11:13:15.898269 master-0 kubenswrapper[33013]: I0313 11:13:15.893537 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"]
Mar 13 11:13:15.993873 master-0 kubenswrapper[33013]: I0313 11:13:15.993812 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:15.993873 master-0 kubenswrapper[33013]: I0313 11:13:15.993876 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:15.994160 master-0 kubenswrapper[33013]: I0313 11:13:15.994054 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:15.994160 master-0 kubenswrapper[33013]: I0313 11:13:15.994103 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:15.994364 master-0 kubenswrapper[33013]: I0313 11:13:15.994253 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nktlp\" (UniqueName: \"kubernetes.io/projected/0439b27d-bb04-467e-abc2-e155fa98d499-kube-api-access-nktlp\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:15.994562 master-0 kubenswrapper[33013]: I0313 11:13:15.994522 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.096441 master-0 kubenswrapper[33013]: I0313 11:13:16.096388 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.096761 master-0 kubenswrapper[33013]: I0313 11:13:16.096515 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.096761 master-0 kubenswrapper[33013]: I0313 11:13:16.096676 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nktlp\" (UniqueName: \"kubernetes.io/projected/0439b27d-bb04-467e-abc2-e155fa98d499-kube-api-access-nktlp\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.096839 master-0 kubenswrapper[33013]: I0313 11:13:16.096815 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.096915 master-0 kubenswrapper[33013]: I0313 11:13:16.096879 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.097069 master-0 kubenswrapper[33013]: I0313 11:13:16.097047 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.102845 master-0 kubenswrapper[33013]: I0313 11:13:16.100489 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.102845 master-0 kubenswrapper[33013]: I0313 11:13:16.101466 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.102845 master-0 kubenswrapper[33013]: I0313 11:13:16.101687 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.104289 master-0 kubenswrapper[33013]: I0313 11:13:16.104236 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.118271 master-0 kubenswrapper[33013]: I0313 11:13:16.118198 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.125012 master-0 kubenswrapper[33013]: I0313 11:13:16.123734 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nktlp\" (UniqueName: \"kubernetes.io/projected/0439b27d-bb04-467e-abc2-e155fa98d499-kube-api-access-nktlp\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.370028 master-0 kubenswrapper[33013]: I0313 11:13:16.366768 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-17f4-account-create-update-pgqvs"
Mar 13 11:13:16.378149 master-0 kubenswrapper[33013]: I0313 11:13:16.378107 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-w9jfw"
Mar 13 11:13:16.386623 master-0 kubenswrapper[33013]: I0313 11:13:16.386552 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78" (OuterVolumeSpecName: "glance") pod "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7" (UID: "ae86f2d6-2fdf-49b2-85d3-f049c035a4c7"). InnerVolumeSpecName "pvc-4701fe27-d49b-425e-b633-bef2656c1d02". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 13 11:13:16.409727 master-0 kubenswrapper[33013]: I0313 11:13:16.393796 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.416252 master-0 kubenswrapper[33013]: I0313 11:13:16.416214 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"2e935de4-7311-4e53-8e37-fc54aac0c5df\" (UID: \"2e935de4-7311-4e53-8e37-fc54aac0c5df\") "
Mar 13 11:13:16.417027 master-0 kubenswrapper[33013]: I0313 11:13:16.417008 33013 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") on node \"master-0\" "
Mar 13 11:13:16.459704 master-0 kubenswrapper[33013]: I0313 11:13:16.454611 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c" (OuterVolumeSpecName: "glance") pod "2e935de4-7311-4e53-8e37-fc54aac0c5df" (UID: "2e935de4-7311-4e53-8e37-fc54aac0c5df"). InnerVolumeSpecName "pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 13 11:13:16.480882 master-0 kubenswrapper[33013]: I0313 11:13:16.479910 33013 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 13 11:13:16.480882 master-0 kubenswrapper[33013]: I0313 11:13:16.480159 33013 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4701fe27-d49b-425e-b633-bef2656c1d02" (UniqueName: "kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78") on node "master-0"
Mar 13 11:13:16.518761 master-0 kubenswrapper[33013]: I0313 11:13:16.518672 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf7zg\" (UniqueName: \"kubernetes.io/projected/65b901e4-e1c4-41bf-8083-31d19c301c44-kube-api-access-jf7zg\") pod \"65b901e4-e1c4-41bf-8083-31d19c301c44\" (UID: \"65b901e4-e1c4-41bf-8083-31d19c301c44\") "
Mar 13 11:13:16.519024 master-0 kubenswrapper[33013]: I0313 11:13:16.518930 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b901e4-e1c4-41bf-8083-31d19c301c44-operator-scripts\") pod \"65b901e4-e1c4-41bf-8083-31d19c301c44\" (UID: \"65b901e4-e1c4-41bf-8083-31d19c301c44\") "
Mar 13 11:13:16.519125 master-0 kubenswrapper[33013]: I0313 11:13:16.519099 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63be249f-23c2-4c9a-a6f3-3f9355da4f66-operator-scripts\") pod \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\" (UID: \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\") "
Mar 13 11:13:16.519184 master-0 kubenswrapper[33013]: I0313 11:13:16.519158 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt55k\" (UniqueName: \"kubernetes.io/projected/63be249f-23c2-4c9a-a6f3-3f9355da4f66-kube-api-access-nt55k\") pod \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\" (UID: \"63be249f-23c2-4c9a-a6f3-3f9355da4f66\") "
Mar 13 11:13:16.519499 master-0 kubenswrapper[33013]: I0313 11:13:16.519475 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:16.519801 master-0 kubenswrapper[33013]: I0313 11:13:16.519763 33013 reconciler_common.go:293] "Volume detached for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:16.526673 master-0 kubenswrapper[33013]: I0313 11:13:16.526137 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b901e4-e1c4-41bf-8083-31d19c301c44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65b901e4-e1c4-41bf-8083-31d19c301c44" (UID: "65b901e4-e1c4-41bf-8083-31d19c301c44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:13:16.527032 master-0 kubenswrapper[33013]: I0313 11:13:16.526195 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b901e4-e1c4-41bf-8083-31d19c301c44-kube-api-access-jf7zg" (OuterVolumeSpecName: "kube-api-access-jf7zg") pod "65b901e4-e1c4-41bf-8083-31d19c301c44" (UID: "65b901e4-e1c4-41bf-8083-31d19c301c44"). InnerVolumeSpecName "kube-api-access-jf7zg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:13:16.527772 master-0 kubenswrapper[33013]: I0313 11:13:16.527709 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63be249f-23c2-4c9a-a6f3-3f9355da4f66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63be249f-23c2-4c9a-a6f3-3f9355da4f66" (UID: "63be249f-23c2-4c9a-a6f3-3f9355da4f66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:13:16.544217 master-0 kubenswrapper[33013]: I0313 11:13:16.538833 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63be249f-23c2-4c9a-a6f3-3f9355da4f66-kube-api-access-nt55k" (OuterVolumeSpecName: "kube-api-access-nt55k") pod "63be249f-23c2-4c9a-a6f3-3f9355da4f66" (UID: "63be249f-23c2-4c9a-a6f3-3f9355da4f66"). InnerVolumeSpecName "kube-api-access-nt55k".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:16.634737 master-0 kubenswrapper[33013]: I0313 11:13:16.630028 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b901e4-e1c4-41bf-8083-31d19c301c44-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:16.634737 master-0 kubenswrapper[33013]: I0313 11:13:16.630121 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63be249f-23c2-4c9a-a6f3-3f9355da4f66-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:16.634737 master-0 kubenswrapper[33013]: I0313 11:13:16.630141 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt55k\" (UniqueName: \"kubernetes.io/projected/63be249f-23c2-4c9a-a6f3-3f9355da4f66-kube-api-access-nt55k\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:16.634737 master-0 kubenswrapper[33013]: I0313 11:13:16.630156 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf7zg\" (UniqueName: \"kubernetes.io/projected/65b901e4-e1c4-41bf-8083-31d19c301c44-kube-api-access-jf7zg\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:16.681691 master-0 kubenswrapper[33013]: I0313 11:13:16.679923 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-w9jfw" Mar 13 11:13:16.700625 master-0 kubenswrapper[33013]: I0313 11:13:16.692216 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-w9jfw" event={"ID":"65b901e4-e1c4-41bf-8083-31d19c301c44","Type":"ContainerDied","Data":"76fa770ee9e83f4818227faeedd970e382c1b8850b1d43e171ca7396040a3174"} Mar 13 11:13:16.700625 master-0 kubenswrapper[33013]: I0313 11:13:16.692329 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76fa770ee9e83f4818227faeedd970e382c1b8850b1d43e171ca7396040a3174" Mar 13 11:13:16.736274 master-0 kubenswrapper[33013]: I0313 11:13:16.736159 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-17f4-account-create-update-pgqvs" Mar 13 11:13:16.777839 master-0 kubenswrapper[33013]: I0313 11:13:16.777706 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b" path="/var/lib/kubelet/pods/1dec3175-2d90-4bf4-9dc7-fcc0ab28bc8b/volumes" Mar 13 11:13:16.778794 master-0 kubenswrapper[33013]: I0313 11:13:16.778775 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e935de4-7311-4e53-8e37-fc54aac0c5df" path="/var/lib/kubelet/pods/2e935de4-7311-4e53-8e37-fc54aac0c5df/volumes" Mar 13 11:13:16.779818 master-0 kubenswrapper[33013]: I0313 11:13:16.779802 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="786d0394-1427-40dd-a9c8-231d5bc3dde3" path="/var/lib/kubelet/pods/786d0394-1427-40dd-a9c8-231d5bc3dde3/volumes" Mar 13 11:13:16.783805 master-0 kubenswrapper[33013]: I0313 11:13:16.783774 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:13:16.786059 master-0 kubenswrapper[33013]: I0313 11:13:16.785999 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-17f4-account-create-update-pgqvs" 
event={"ID":"63be249f-23c2-4c9a-a6f3-3f9355da4f66","Type":"ContainerDied","Data":"92568b129e8c80b6906eec3765e15dcc01b694cb134f6b37a464fbb7db325733"} Mar 13 11:13:16.787548 master-0 kubenswrapper[33013]: I0313 11:13:16.787523 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92568b129e8c80b6906eec3765e15dcc01b694cb134f6b37a464fbb7db325733" Mar 13 11:13:16.797871 master-0 kubenswrapper[33013]: I0313 11:13:16.797755 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:13:16.831358 master-0 kubenswrapper[33013]: I0313 11:13:16.826933 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:13:16.831358 master-0 kubenswrapper[33013]: E0313 11:13:16.827544 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b901e4-e1c4-41bf-8083-31d19c301c44" containerName="mariadb-database-create" Mar 13 11:13:16.831358 master-0 kubenswrapper[33013]: I0313 11:13:16.827562 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b901e4-e1c4-41bf-8083-31d19c301c44" containerName="mariadb-database-create" Mar 13 11:13:16.831358 master-0 kubenswrapper[33013]: E0313 11:13:16.827637 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63be249f-23c2-4c9a-a6f3-3f9355da4f66" containerName="mariadb-account-create-update" Mar 13 11:13:16.831358 master-0 kubenswrapper[33013]: I0313 11:13:16.827647 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="63be249f-23c2-4c9a-a6f3-3f9355da4f66" containerName="mariadb-account-create-update" Mar 13 11:13:16.831358 master-0 kubenswrapper[33013]: I0313 11:13:16.827891 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="63be249f-23c2-4c9a-a6f3-3f9355da4f66" containerName="mariadb-account-create-update" Mar 13 11:13:16.831358 master-0 kubenswrapper[33013]: I0313 11:13:16.827937 33013 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="65b901e4-e1c4-41bf-8083-31d19c301c44" containerName="mariadb-database-create" Mar 13 11:13:16.831358 master-0 kubenswrapper[33013]: I0313 11:13:16.829224 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:16.856619 master-0 kubenswrapper[33013]: I0313 11:13:16.842020 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-87aa4-default-external-config-data" Mar 13 11:13:16.861609 master-0 kubenswrapper[33013]: I0313 11:13:16.859323 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:13:16.963645 master-0 kubenswrapper[33013]: I0313 11:13:16.963355 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjt2z\" (UniqueName: \"kubernetes.io/projected/14de46f4-84c3-4a39-ad37-3c8d3486a657-kube-api-access-tjt2z\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:16.963645 master-0 kubenswrapper[33013]: I0313 11:13:16.963419 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:16.963645 master-0 kubenswrapper[33013]: I0313 11:13:16.963473 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " 
pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:16.963645 master-0 kubenswrapper[33013]: I0313 11:13:16.963504 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:16.965028 master-0 kubenswrapper[33013]: I0313 11:13:16.964633 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:16.965028 master-0 kubenswrapper[33013]: I0313 11:13:16.964725 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:16.965028 master-0 kubenswrapper[33013]: I0313 11:13:16.964943 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.068455 master-0 kubenswrapper[33013]: I0313 11:13:17.068357 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjt2z\" (UniqueName: 
\"kubernetes.io/projected/14de46f4-84c3-4a39-ad37-3c8d3486a657-kube-api-access-tjt2z\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.068455 master-0 kubenswrapper[33013]: I0313 11:13:17.068438 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.068785 master-0 kubenswrapper[33013]: I0313 11:13:17.068525 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.068785 master-0 kubenswrapper[33013]: I0313 11:13:17.068566 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.068785 master-0 kubenswrapper[33013]: I0313 11:13:17.068690 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.068785 master-0 kubenswrapper[33013]: I0313 11:13:17.068745 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.069087 master-0 kubenswrapper[33013]: I0313 11:13:17.068992 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.069819 master-0 kubenswrapper[33013]: I0313 11:13:17.069415 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.072375 master-0 kubenswrapper[33013]: I0313 11:13:17.069796 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.082936 master-0 kubenswrapper[33013]: I0313 11:13:17.077463 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.082936 master-0 kubenswrapper[33013]: I0313 11:13:17.079357 
33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 11:13:17.082936 master-0 kubenswrapper[33013]: I0313 11:13:17.079418 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/02d92e594b7cf20d10752edde97d9397ac0766c013b947c8de1147a201f75769/globalmount\"" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.082936 master-0 kubenswrapper[33013]: I0313 11:13:17.082628 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.084534 master-0 kubenswrapper[33013]: I0313 11:13:17.084489 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.093779 master-0 kubenswrapper[33013]: I0313 11:13:17.093697 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjt2z\" (UniqueName: \"kubernetes.io/projected/14de46f4-84c3-4a39-ad37-3c8d3486a657-kube-api-access-tjt2z\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:17.761267 master-0 kubenswrapper[33013]: I0313 
11:13:17.761214 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:18.030289 master-0 kubenswrapper[33013]: I0313 11:13:18.030226 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:18.741210 master-0 kubenswrapper[33013]: I0313 11:13:18.738476 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae86f2d6-2fdf-49b2-85d3-f049c035a4c7" path="/var/lib/kubelet/pods/ae86f2d6-2fdf-49b2-85d3-f049c035a4c7/volumes" Mar 13 11:13:19.154780 master-0 kubenswrapper[33013]: I0313 11:13:19.154692 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:13:19.155538 master-0 kubenswrapper[33013]: E0313 11:13:19.155509 33013 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-87aa4-default-external-api-0" podUID="14de46f4-84c3-4a39-ad37-3c8d3486a657" Mar 13 11:13:19.196442 master-0 kubenswrapper[33013]: I0313 11:13:19.195916 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:19.294256 master-0 kubenswrapper[33013]: I0313 11:13:19.294153 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:13:19.768565 master-0 
kubenswrapper[33013]: I0313 11:13:19.768502 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2j64b" event={"ID":"240bf9bb-a4f9-4b00-9f3b-da8db52d618a","Type":"ContainerStarted","Data":"c6724595de114d87b0a2ca3cb8008804b7047c715a2bb80212ddff89938bc3c2"} Mar 13 11:13:19.777720 master-0 kubenswrapper[33013]: I0313 11:13:19.775482 33013 generic.go:334] "Generic (PLEG): container finished" podID="df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" containerID="b31c021ca0ad71c5cbd5655b2a563b3647021150402cf3e523799684f7cd9c4f" exitCode=0 Mar 13 11:13:19.777720 master-0 kubenswrapper[33013]: I0313 11:13:19.775607 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:19.777720 master-0 kubenswrapper[33013]: I0313 11:13:19.776550 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-phs4s" event={"ID":"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5","Type":"ContainerDied","Data":"b31c021ca0ad71c5cbd5655b2a563b3647021150402cf3e523799684f7cd9c4f"} Mar 13 11:13:19.796095 master-0 kubenswrapper[33013]: I0313 11:13:19.794741 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:13:19.813205 master-0 kubenswrapper[33013]: I0313 11:13:19.811524 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2j64b" podStartSLOduration=3.263930182 podStartE2EDuration="9.811502524s" podCreationTimestamp="2026-03-13 11:13:10 +0000 UTC" firstStartedPulling="2026-03-13 11:13:12.682737776 +0000 UTC m=+976.158691125" lastFinishedPulling="2026-03-13 11:13:19.230310118 +0000 UTC m=+982.706263467" observedRunningTime="2026-03-13 11:13:19.792534718 +0000 UTC m=+983.268488067" watchObservedRunningTime="2026-03-13 11:13:19.811502524 +0000 UTC m=+983.287455873" Mar 13 11:13:19.813662 master-0 kubenswrapper[33013]: I0313 11:13:19.813622 33013 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:19.849201 master-0 kubenswrapper[33013]: I0313 11:13:19.849127 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-httpd-run\") pod \"14de46f4-84c3-4a39-ad37-3c8d3486a657\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " Mar 13 11:13:19.849473 master-0 kubenswrapper[33013]: I0313 11:13:19.849224 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-scripts\") pod \"14de46f4-84c3-4a39-ad37-3c8d3486a657\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " Mar 13 11:13:19.849473 master-0 kubenswrapper[33013]: I0313 11:13:19.849271 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-config-data\") pod \"14de46f4-84c3-4a39-ad37-3c8d3486a657\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " Mar 13 11:13:19.849571 master-0 kubenswrapper[33013]: I0313 11:13:19.849534 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"14de46f4-84c3-4a39-ad37-3c8d3486a657\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " Mar 13 11:13:19.849682 master-0 kubenswrapper[33013]: I0313 11:13:19.849574 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-combined-ca-bundle\") pod \"14de46f4-84c3-4a39-ad37-3c8d3486a657\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " Mar 13 11:13:19.849730 master-0 kubenswrapper[33013]: I0313 11:13:19.849715 33013 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjt2z\" (UniqueName: \"kubernetes.io/projected/14de46f4-84c3-4a39-ad37-3c8d3486a657-kube-api-access-tjt2z\") pod \"14de46f4-84c3-4a39-ad37-3c8d3486a657\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " Mar 13 11:13:19.850137 master-0 kubenswrapper[33013]: I0313 11:13:19.849770 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-logs\") pod \"14de46f4-84c3-4a39-ad37-3c8d3486a657\" (UID: \"14de46f4-84c3-4a39-ad37-3c8d3486a657\") " Mar 13 11:13:19.850531 master-0 kubenswrapper[33013]: I0313 11:13:19.850287 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "14de46f4-84c3-4a39-ad37-3c8d3486a657" (UID: "14de46f4-84c3-4a39-ad37-3c8d3486a657"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:13:19.850531 master-0 kubenswrapper[33013]: I0313 11:13:19.850475 33013 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:19.851205 master-0 kubenswrapper[33013]: I0313 11:13:19.850785 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-logs" (OuterVolumeSpecName: "logs") pod "14de46f4-84c3-4a39-ad37-3c8d3486a657" (UID: "14de46f4-84c3-4a39-ad37-3c8d3486a657"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:13:19.854259 master-0 kubenswrapper[33013]: I0313 11:13:19.854229 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14de46f4-84c3-4a39-ad37-3c8d3486a657-kube-api-access-tjt2z" (OuterVolumeSpecName: "kube-api-access-tjt2z") pod "14de46f4-84c3-4a39-ad37-3c8d3486a657" (UID: "14de46f4-84c3-4a39-ad37-3c8d3486a657"). InnerVolumeSpecName "kube-api-access-tjt2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:19.854394 master-0 kubenswrapper[33013]: I0313 11:13:19.854348 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14de46f4-84c3-4a39-ad37-3c8d3486a657" (UID: "14de46f4-84c3-4a39-ad37-3c8d3486a657"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:19.855054 master-0 kubenswrapper[33013]: I0313 11:13:19.855016 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-scripts" (OuterVolumeSpecName: "scripts") pod "14de46f4-84c3-4a39-ad37-3c8d3486a657" (UID: "14de46f4-84c3-4a39-ad37-3c8d3486a657"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:19.868762 master-0 kubenswrapper[33013]: I0313 11:13:19.867877 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-config-data" (OuterVolumeSpecName: "config-data") pod "14de46f4-84c3-4a39-ad37-3c8d3486a657" (UID: "14de46f4-84c3-4a39-ad37-3c8d3486a657"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:19.873166 master-0 kubenswrapper[33013]: I0313 11:13:19.873095 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78" (OuterVolumeSpecName: "glance") pod "14de46f4-84c3-4a39-ad37-3c8d3486a657" (UID: "14de46f4-84c3-4a39-ad37-3c8d3486a657"). InnerVolumeSpecName "pvc-4701fe27-d49b-425e-b633-bef2656c1d02". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 11:13:19.951687 master-0 kubenswrapper[33013]: I0313 11:13:19.951622 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14de46f4-84c3-4a39-ad37-3c8d3486a657-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:19.951687 master-0 kubenswrapper[33013]: I0313 11:13:19.951666 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:19.951687 master-0 kubenswrapper[33013]: I0313 11:13:19.951678 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:19.952086 master-0 kubenswrapper[33013]: I0313 11:13:19.951744 33013 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") on node \"master-0\" " Mar 13 11:13:19.952086 master-0 kubenswrapper[33013]: I0313 11:13:19.951757 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14de46f4-84c3-4a39-ad37-3c8d3486a657-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:19.952086 master-0 kubenswrapper[33013]: I0313 
11:13:19.951771 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjt2z\" (UniqueName: \"kubernetes.io/projected/14de46f4-84c3-4a39-ad37-3c8d3486a657-kube-api-access-tjt2z\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:20.000608 master-0 kubenswrapper[33013]: I0313 11:13:20.000540 33013 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 13 11:13:20.000898 master-0 kubenswrapper[33013]: I0313 11:13:20.000752 33013 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4701fe27-d49b-425e-b633-bef2656c1d02" (UniqueName: "kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78") on node "master-0"
Mar 13 11:13:20.053949 master-0 kubenswrapper[33013]: I0313 11:13:20.053888 33013 reconciler_common.go:293] "Volume detached for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:20.795097 master-0 kubenswrapper[33013]: I0313 11:13:20.794958 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:20.796123 master-0 kubenswrapper[33013]: I0313 11:13:20.796049 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"0439b27d-bb04-467e-abc2-e155fa98d499","Type":"ContainerStarted","Data":"1df512348c45be76c8e38e724ffb95204f212fa1f3b12f02dc63cd9fb8ca1c43"}
Mar 13 11:13:20.796123 master-0 kubenswrapper[33013]: I0313 11:13:20.796100 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"0439b27d-bb04-467e-abc2-e155fa98d499","Type":"ContainerStarted","Data":"91ffdf8b2e31be8ef71ccf194e77959524ed1417c478358dc64d0407338baf7f"}
Mar 13 11:13:20.874613 master-0 kubenswrapper[33013]: I0313 11:13:20.870336 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"]
Mar 13 11:13:20.883879 master-0 kubenswrapper[33013]: I0313 11:13:20.883801 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"]
Mar 13 11:13:20.976647 master-0 kubenswrapper[33013]: I0313 11:13:20.976465 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-87aa4-default-external-api-0"]
Mar 13 11:13:20.980661 master-0 kubenswrapper[33013]: I0313 11:13:20.979428 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:20.995698 master-0 kubenswrapper[33013]: I0313 11:13:20.988377 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-87aa4-default-external-config-data"
Mar 13 11:13:20.995698 master-0 kubenswrapper[33013]: I0313 11:13:20.992122 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"]
Mar 13 11:13:21.006646 master-0 kubenswrapper[33013]: I0313 11:13:21.004348 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 13 11:13:21.079644 master-0 kubenswrapper[33013]: I0313 11:13:21.075388 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-s6b9s"]
Mar 13 11:13:21.079644 master-0 kubenswrapper[33013]: I0313 11:13:21.077565 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.083027 master-0 kubenswrapper[33013]: I0313 11:13:21.080853 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-scripts\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.083027 master-0 kubenswrapper[33013]: I0313 11:13:21.081021 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k2mb\" (UniqueName: \"kubernetes.io/projected/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-kube-api-access-7k2mb\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.083027 master-0 kubenswrapper[33013]: I0313 11:13:21.081061 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.083027 master-0 kubenswrapper[33013]: I0313 11:13:21.081693 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-etc-podinfo\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.083027 master-0 kubenswrapper[33013]: I0313 11:13:21.081803 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts"
Mar 13 11:13:21.083027 master-0 kubenswrapper[33013]: I0313 11:13:21.081833 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-combined-ca-bundle\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.083027 master-0 kubenswrapper[33013]: I0313 11:13:21.081996 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data-merged\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.096632 master-0 kubenswrapper[33013]: I0313 11:13:21.096431 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Mar 13 11:13:21.130276 master-0 kubenswrapper[33013]: I0313 11:13:21.130229 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-s6b9s"]
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.183953 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184034 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-combined-ca-bundle\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184061 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184094 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnb7f\" (UniqueName: \"kubernetes.io/projected/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-kube-api-access-hnb7f\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184129 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-public-tls-certs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184166 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data-merged\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184239 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184264 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-scripts\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184286 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184320 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.184477 master-0 kubenswrapper[33013]: I0313 11:13:21.184337 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.186012 master-0 kubenswrapper[33013]: I0313 11:13:21.184730 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k2mb\" (UniqueName: \"kubernetes.io/projected/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-kube-api-access-7k2mb\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.186012 master-0 kubenswrapper[33013]: I0313 11:13:21.184759 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.186012 master-0 kubenswrapper[33013]: I0313 11:13:21.184815 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-etc-podinfo\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.208109 master-0 kubenswrapper[33013]: I0313 11:13:21.189103 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-etc-podinfo\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.208109 master-0 kubenswrapper[33013]: I0313 11:13:21.195194 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-combined-ca-bundle\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.208109 master-0 kubenswrapper[33013]: I0313 11:13:21.195985 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data-merged\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.208109 master-0 kubenswrapper[33013]: I0313 11:13:21.199918 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-scripts\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.209751 master-0 kubenswrapper[33013]: I0313 11:13:21.209626 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.221743 master-0 kubenswrapper[33013]: I0313 11:13:21.221602 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k2mb\" (UniqueName: \"kubernetes.io/projected/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-kube-api-access-7k2mb\") pod \"ironic-db-sync-s6b9s\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") " pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.287484 master-0 kubenswrapper[33013]: I0313 11:13:21.287444 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.287711 master-0 kubenswrapper[33013]: I0313 11:13:21.287696 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.287828 master-0 kubenswrapper[33013]: I0313 11:13:21.287796 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnb7f\" (UniqueName: \"kubernetes.io/projected/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-kube-api-access-hnb7f\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.287955 master-0 kubenswrapper[33013]: I0313 11:13:21.287941 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-public-tls-certs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.288133 master-0 kubenswrapper[33013]: I0313 11:13:21.288119 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.288563 master-0 kubenswrapper[33013]: I0313 11:13:21.288470 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.288945 master-0 kubenswrapper[33013]: I0313 11:13:21.288915 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.289212 master-0 kubenswrapper[33013]: I0313 11:13:21.289197 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.294444 master-0 kubenswrapper[33013]: I0313 11:13:21.294373 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-public-tls-certs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.296075 master-0 kubenswrapper[33013]: I0313 11:13:21.296035 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.296225 master-0 kubenswrapper[33013]: I0313 11:13:21.296192 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.298150 master-0 kubenswrapper[33013]: I0313 11:13:21.298109 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.303300 master-0 kubenswrapper[33013]: I0313 11:13:21.303254 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.303680 master-0 kubenswrapper[33013]: I0313 11:13:21.303651 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 13 11:13:21.303746 master-0 kubenswrapper[33013]: I0313 11:13:21.303723 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/02d92e594b7cf20d10752edde97d9397ac0766c013b947c8de1147a201f75769/globalmount\"" pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.304095 master-0 kubenswrapper[33013]: I0313 11:13:21.304063 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.311220 master-0 kubenswrapper[33013]: I0313 11:13:21.311189 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnb7f\" (UniqueName: \"kubernetes.io/projected/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-kube-api-access-hnb7f\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0"
Mar 13 11:13:21.389550 master-0 kubenswrapper[33013]: I0313 11:13:21.389507 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-phs4s"
Mar 13 11:13:21.443234 master-0 kubenswrapper[33013]: I0313 11:13:21.443178 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:13:21.493746 master-0 kubenswrapper[33013]: I0313 11:13:21.492933 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hnww\" (UniqueName: \"kubernetes.io/projected/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-kube-api-access-5hnww\") pod \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") "
Mar 13 11:13:21.493746 master-0 kubenswrapper[33013]: I0313 11:13:21.493052 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-combined-ca-bundle\") pod \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") "
Mar 13 11:13:21.493746 master-0 kubenswrapper[33013]: I0313 11:13:21.493197 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-credential-keys\") pod \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") "
Mar 13 11:13:21.493746 master-0 kubenswrapper[33013]: I0313 11:13:21.493339 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-scripts\") pod \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") "
Mar 13 11:13:21.493746 master-0 kubenswrapper[33013]: I0313 11:13:21.493367 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-config-data\") pod \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") "
Mar 13 11:13:21.493746 master-0 kubenswrapper[33013]: I0313 11:13:21.493394 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-fernet-keys\") pod \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\" (UID: \"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5\") "
Mar 13 11:13:21.500935 master-0 kubenswrapper[33013]: I0313 11:13:21.500855 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" (UID: "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:21.512667 master-0 kubenswrapper[33013]: I0313 11:13:21.512305 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-scripts" (OuterVolumeSpecName: "scripts") pod "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" (UID: "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:21.512667 master-0 kubenswrapper[33013]: I0313 11:13:21.512355 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-kube-api-access-5hnww" (OuterVolumeSpecName: "kube-api-access-5hnww") pod "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" (UID: "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5"). InnerVolumeSpecName "kube-api-access-5hnww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:13:21.513227 master-0 kubenswrapper[33013]: I0313 11:13:21.513183 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" (UID: "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:21.548711 master-0 kubenswrapper[33013]: I0313 11:13:21.547425 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-config-data" (OuterVolumeSpecName: "config-data") pod "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" (UID: "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:21.548711 master-0 kubenswrapper[33013]: I0313 11:13:21.547603 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" (UID: "df0c58fd-2f63-48b4-af91-0bbc1edcdcd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:21.599186 master-0 kubenswrapper[33013]: I0313 11:13:21.599092 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hnww\" (UniqueName: \"kubernetes.io/projected/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-kube-api-access-5hnww\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:21.599186 master-0 kubenswrapper[33013]: I0313 11:13:21.599167 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:21.599186 master-0 kubenswrapper[33013]: I0313 11:13:21.599184 33013 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-credential-keys\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:21.599186 master-0 kubenswrapper[33013]: I0313 11:13:21.599195 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:21.599186 master-0 kubenswrapper[33013]: I0313 11:13:21.599204 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:21.599186 master-0 kubenswrapper[33013]: I0313 11:13:21.599214 33013 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5-fernet-keys\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:21.787921 master-0 kubenswrapper[33013]: I0313 11:13:21.786394 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"
Mar 13 11:13:21.810717 master-0 kubenswrapper[33013]: I0313 11:13:21.810534 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-phs4s"
Mar 13 11:13:21.810717 master-0 kubenswrapper[33013]: I0313 11:13:21.810610 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-phs4s" event={"ID":"df0c58fd-2f63-48b4-af91-0bbc1edcdcd5","Type":"ContainerDied","Data":"dfad83118483ce15e2f7a4762b9eaddbfada81956b988e396773f599a3d7d2f0"}
Mar 13 11:13:21.810717 master-0 kubenswrapper[33013]: I0313 11:13:21.810652 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfad83118483ce15e2f7a4762b9eaddbfada81956b988e396773f599a3d7d2f0"
Mar 13 11:13:21.815092 master-0 kubenswrapper[33013]: I0313 11:13:21.814368 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"0439b27d-bb04-467e-abc2-e155fa98d499","Type":"ContainerStarted","Data":"aef164fc86211ff35feb1d2510002aea9b1224d483a23b250c71ea66e2df3993"}
Mar 13 11:13:21.815092 master-0 kubenswrapper[33013]: I0313 11:13:21.814558 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-87aa4-default-internal-api-0" podUID="0439b27d-bb04-467e-abc2-e155fa98d499" containerName="glance-log" containerID="cri-o://1df512348c45be76c8e38e724ffb95204f212fa1f3b12f02dc63cd9fb8ca1c43" gracePeriod=30
Mar 13 11:13:21.815092 master-0 kubenswrapper[33013]: I0313 11:13:21.814996 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-87aa4-default-internal-api-0" podUID="0439b27d-bb04-467e-abc2-e155fa98d499" containerName="glance-httpd" containerID="cri-o://aef164fc86211ff35feb1d2510002aea9b1224d483a23b250c71ea66e2df3993" gracePeriod=30
Mar 13 11:13:22.016171 master-0 kubenswrapper[33013]: I0313 11:13:22.013709 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-s6b9s"]
Mar 13 11:13:22.016171 master-0 kubenswrapper[33013]: W0313 11:13:22.014483 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bf1cd3c_9327_4a27_aaee_20da3d6111f1.slice/crio-d7758f1b6f512832ca59a4a46af29729a606768e35e39602418ecd319d1b214b WatchSource:0}: Error finding container d7758f1b6f512832ca59a4a46af29729a606768e35e39602418ecd319d1b214b: Status 404 returned error can't find the container with id d7758f1b6f512832ca59a4a46af29729a606768e35e39602418ecd319d1b214b
Mar 13 11:13:22.019679 master-0 kubenswrapper[33013]: I0313 11:13:22.019108 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-87aa4-default-internal-api-0" podStartSLOduration=7.019084067 podStartE2EDuration="7.019084067s" podCreationTimestamp="2026-03-13 11:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:21.989155933 +0000 UTC m=+985.465109282" watchObservedRunningTime="2026-03-13 11:13:22.019084067 +0000 UTC m=+985.495037416"
Mar 13 11:13:22.040642 master-0 kubenswrapper[33013]: I0313 11:13:22.039837 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-trbrb"]
Mar 13 11:13:22.040642 master-0 kubenswrapper[33013]: I0313 11:13:22.040118 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" podUID="b91d0010-b2bb-4203-99fe-500d30d7d691" containerName="dnsmasq-dns" containerID="cri-o://b764c2a493281f83a932ef04377d2e7ecfa9a28dc3f7c001b96916d5d7a36b01" gracePeriod=10
Mar 13 11:13:22.149779 master-0 kubenswrapper[33013]: I0313 11:13:22.149637 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-phs4s"]
Mar 13 11:13:22.219612 master-0 kubenswrapper[33013]: I0313 11:13:22.215428 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-phs4s"]
Mar 13 11:13:22.230637 master-0 kubenswrapper[33013]: I0313 11:13:22.226943 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vk2ws"]
Mar 13 11:13:22.230637 master-0 kubenswrapper[33013]: E0313 11:13:22.227548 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" containerName="keystone-bootstrap"
Mar 13 11:13:22.230637 master-0 kubenswrapper[33013]: I0313 11:13:22.227563 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" containerName="keystone-bootstrap"
Mar 13 11:13:22.230637 master-0 kubenswrapper[33013]: I0313 11:13:22.227809 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" containerName="keystone-bootstrap"
Mar 13 11:13:22.230637 master-0 kubenswrapper[33013]: I0313 11:13:22.228643 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.232240 master-0 kubenswrapper[33013]: I0313 11:13:22.232121 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 13 11:13:22.232612 master-0 kubenswrapper[33013]: I0313 11:13:22.232348 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 13 11:13:22.232612 master-0 kubenswrapper[33013]: I0313 11:13:22.232513 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 13 11:13:22.252437 master-0 kubenswrapper[33013]: I0313 11:13:22.252089 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vk2ws"]
Mar 13 11:13:22.439982 master-0 kubenswrapper[33013]: I0313 11:13:22.439327 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-fernet-keys\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.439982 master-0 kubenswrapper[33013]: I0313 11:13:22.439543 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-combined-ca-bundle\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.439982 master-0 kubenswrapper[33013]: I0313 11:13:22.439667 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-scripts\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.440359 master-0 kubenswrapper[33013]: I0313 11:13:22.439993 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgc49\" (UniqueName: \"kubernetes.io/projected/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-kube-api-access-zgc49\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.440359 master-0 kubenswrapper[33013]: I0313 11:13:22.440067 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-config-data\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.440359 master-0 kubenswrapper[33013]: I0313 11:13:22.440297 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-credential-keys\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.542692 master-0 kubenswrapper[33013]: I0313 11:13:22.542553 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-config-data\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.542965 master-0 kubenswrapper[33013]: I0313 11:13:22.542708 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-credential-keys\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.542965 master-0 kubenswrapper[33013]: I0313 11:13:22.542807 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-fernet-keys\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.542965 master-0 kubenswrapper[33013]: I0313 11:13:22.542848 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-combined-ca-bundle\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:22.542965 master-0 kubenswrapper[33013]: I0313 11:13:22.542932 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-scripts\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws" Mar 13 11:13:22.543161 master-0 kubenswrapper[33013]: I0313 11:13:22.543019 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgc49\" (UniqueName: \"kubernetes.io/projected/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-kube-api-access-zgc49\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws" Mar 13 11:13:22.546984 master-0 kubenswrapper[33013]: I0313 11:13:22.546888 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-combined-ca-bundle\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws" Mar 13 11:13:22.547158 master-0 kubenswrapper[33013]: I0313 11:13:22.547113 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-config-data\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws" Mar 13 11:13:22.547272 master-0 kubenswrapper[33013]: I0313 11:13:22.547238 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-scripts\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws" Mar 13 11:13:22.549300 master-0 kubenswrapper[33013]: I0313 11:13:22.549259 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-credential-keys\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws" Mar 13 11:13:22.549450 master-0 kubenswrapper[33013]: I0313 11:13:22.549414 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-fernet-keys\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws" Mar 13 11:13:22.727681 master-0 kubenswrapper[33013]: I0313 11:13:22.727056 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgc49\" (UniqueName: \"kubernetes.io/projected/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-kube-api-access-zgc49\") pod \"keystone-bootstrap-vk2ws\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") " pod="openstack/keystone-bootstrap-vk2ws" Mar 13 11:13:22.737606 master-0 kubenswrapper[33013]: I0313 11:13:22.735441 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14de46f4-84c3-4a39-ad37-3c8d3486a657" path="/var/lib/kubelet/pods/14de46f4-84c3-4a39-ad37-3c8d3486a657/volumes" Mar 13 11:13:22.737606 master-0 kubenswrapper[33013]: I0313 11:13:22.735897 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0c58fd-2f63-48b4-af91-0bbc1edcdcd5" path="/var/lib/kubelet/pods/df0c58fd-2f63-48b4-af91-0bbc1edcdcd5/volumes" Mar 13 11:13:22.893609 master-0 kubenswrapper[33013]: I0313 11:13:22.880917 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s6b9s" event={"ID":"1bf1cd3c-9327-4a27-aaee-20da3d6111f1","Type":"ContainerStarted","Data":"d7758f1b6f512832ca59a4a46af29729a606768e35e39602418ecd319d1b214b"} Mar 13 11:13:22.893609 master-0 kubenswrapper[33013]: I0313 11:13:22.882041 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vk2ws" Mar 13 11:13:22.946607 master-0 kubenswrapper[33013]: I0313 11:13:22.938352 33013 generic.go:334] "Generic (PLEG): container finished" podID="b91d0010-b2bb-4203-99fe-500d30d7d691" containerID="b764c2a493281f83a932ef04377d2e7ecfa9a28dc3f7c001b96916d5d7a36b01" exitCode=0 Mar 13 11:13:22.946607 master-0 kubenswrapper[33013]: I0313 11:13:22.938435 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" event={"ID":"b91d0010-b2bb-4203-99fe-500d30d7d691","Type":"ContainerDied","Data":"b764c2a493281f83a932ef04377d2e7ecfa9a28dc3f7c001b96916d5d7a36b01"} Mar 13 11:13:22.988686 master-0 kubenswrapper[33013]: I0313 11:13:22.974004 33013 generic.go:334] "Generic (PLEG): container finished" podID="0439b27d-bb04-467e-abc2-e155fa98d499" containerID="aef164fc86211ff35feb1d2510002aea9b1224d483a23b250c71ea66e2df3993" exitCode=0 Mar 13 11:13:22.988686 master-0 kubenswrapper[33013]: I0313 11:13:22.974050 33013 generic.go:334] "Generic (PLEG): container finished" podID="0439b27d-bb04-467e-abc2-e155fa98d499" containerID="1df512348c45be76c8e38e724ffb95204f212fa1f3b12f02dc63cd9fb8ca1c43" exitCode=143 Mar 13 11:13:22.988686 master-0 kubenswrapper[33013]: I0313 11:13:22.974091 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"0439b27d-bb04-467e-abc2-e155fa98d499","Type":"ContainerDied","Data":"aef164fc86211ff35feb1d2510002aea9b1224d483a23b250c71ea66e2df3993"} Mar 13 11:13:22.988686 master-0 kubenswrapper[33013]: I0313 11:13:22.974120 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"0439b27d-bb04-467e-abc2-e155fa98d499","Type":"ContainerDied","Data":"1df512348c45be76c8e38e724ffb95204f212fa1f3b12f02dc63cd9fb8ca1c43"} Mar 13 11:13:23.100133 master-0 kubenswrapper[33013]: I0313 11:13:23.100089 33013 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:13:23.174563 master-0 kubenswrapper[33013]: I0313 11:13:23.173530 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-sb\") pod \"b91d0010-b2bb-4203-99fe-500d30d7d691\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " Mar 13 11:13:23.174563 master-0 kubenswrapper[33013]: I0313 11:13:23.173640 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-nb\") pod \"b91d0010-b2bb-4203-99fe-500d30d7d691\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " Mar 13 11:13:23.174563 master-0 kubenswrapper[33013]: I0313 11:13:23.173739 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2znkm\" (UniqueName: \"kubernetes.io/projected/b91d0010-b2bb-4203-99fe-500d30d7d691-kube-api-access-2znkm\") pod \"b91d0010-b2bb-4203-99fe-500d30d7d691\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " Mar 13 11:13:23.174563 master-0 kubenswrapper[33013]: I0313 11:13:23.173803 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-config\") pod \"b91d0010-b2bb-4203-99fe-500d30d7d691\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " Mar 13 11:13:23.174563 master-0 kubenswrapper[33013]: I0313 11:13:23.173846 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-dns-svc\") pod \"b91d0010-b2bb-4203-99fe-500d30d7d691\" (UID: \"b91d0010-b2bb-4203-99fe-500d30d7d691\") " Mar 13 11:13:23.229073 master-0 kubenswrapper[33013]: I0313 11:13:23.228980 33013 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91d0010-b2bb-4203-99fe-500d30d7d691-kube-api-access-2znkm" (OuterVolumeSpecName: "kube-api-access-2znkm") pod "b91d0010-b2bb-4203-99fe-500d30d7d691" (UID: "b91d0010-b2bb-4203-99fe-500d30d7d691"). InnerVolumeSpecName "kube-api-access-2znkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:23.254531 master-0 kubenswrapper[33013]: I0313 11:13:23.254460 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-config" (OuterVolumeSpecName: "config") pod "b91d0010-b2bb-4203-99fe-500d30d7d691" (UID: "b91d0010-b2bb-4203-99fe-500d30d7d691"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:23.260337 master-0 kubenswrapper[33013]: I0313 11:13:23.260283 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b91d0010-b2bb-4203-99fe-500d30d7d691" (UID: "b91d0010-b2bb-4203-99fe-500d30d7d691"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:23.267359 master-0 kubenswrapper[33013]: I0313 11:13:23.267295 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b91d0010-b2bb-4203-99fe-500d30d7d691" (UID: "b91d0010-b2bb-4203-99fe-500d30d7d691"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:23.279035 master-0 kubenswrapper[33013]: I0313 11:13:23.278937 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2znkm\" (UniqueName: \"kubernetes.io/projected/b91d0010-b2bb-4203-99fe-500d30d7d691-kube-api-access-2znkm\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.279035 master-0 kubenswrapper[33013]: I0313 11:13:23.279004 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.279035 master-0 kubenswrapper[33013]: I0313 11:13:23.279017 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.279035 master-0 kubenswrapper[33013]: I0313 11:13:23.279027 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.286226 master-0 kubenswrapper[33013]: I0313 11:13:23.286173 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b91d0010-b2bb-4203-99fe-500d30d7d691" (UID: "b91d0010-b2bb-4203-99fe-500d30d7d691"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:23.358732 master-0 kubenswrapper[33013]: I0313 11:13:23.357738 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:23.384159 master-0 kubenswrapper[33013]: I0313 11:13:23.383934 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"0439b27d-bb04-467e-abc2-e155fa98d499\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " Mar 13 11:13:23.384159 master-0 kubenswrapper[33013]: I0313 11:13:23.383981 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-combined-ca-bundle\") pod \"0439b27d-bb04-467e-abc2-e155fa98d499\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " Mar 13 11:13:23.384159 master-0 kubenswrapper[33013]: I0313 11:13:23.384118 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-config-data\") pod \"0439b27d-bb04-467e-abc2-e155fa98d499\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " Mar 13 11:13:23.384159 master-0 kubenswrapper[33013]: I0313 11:13:23.384178 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-httpd-run\") pod \"0439b27d-bb04-467e-abc2-e155fa98d499\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " Mar 13 11:13:23.387660 master-0 kubenswrapper[33013]: I0313 11:13:23.384662 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b91d0010-b2bb-4203-99fe-500d30d7d691-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.387660 master-0 kubenswrapper[33013]: I0313 11:13:23.385137 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0439b27d-bb04-467e-abc2-e155fa98d499" (UID: "0439b27d-bb04-467e-abc2-e155fa98d499"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:13:23.414069 master-0 kubenswrapper[33013]: I0313 11:13:23.413999 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0439b27d-bb04-467e-abc2-e155fa98d499" (UID: "0439b27d-bb04-467e-abc2-e155fa98d499"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:23.441545 master-0 kubenswrapper[33013]: I0313 11:13:23.441479 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-config-data" (OuterVolumeSpecName: "config-data") pod "0439b27d-bb04-467e-abc2-e155fa98d499" (UID: "0439b27d-bb04-467e-abc2-e155fa98d499"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:23.488960 master-0 kubenswrapper[33013]: I0313 11:13:23.487571 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nktlp\" (UniqueName: \"kubernetes.io/projected/0439b27d-bb04-467e-abc2-e155fa98d499-kube-api-access-nktlp\") pod \"0439b27d-bb04-467e-abc2-e155fa98d499\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " Mar 13 11:13:23.488960 master-0 kubenswrapper[33013]: I0313 11:13:23.487649 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-scripts\") pod \"0439b27d-bb04-467e-abc2-e155fa98d499\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " Mar 13 11:13:23.488960 master-0 kubenswrapper[33013]: I0313 11:13:23.487747 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-logs\") pod \"0439b27d-bb04-467e-abc2-e155fa98d499\" (UID: \"0439b27d-bb04-467e-abc2-e155fa98d499\") " Mar 13 11:13:23.488960 master-0 kubenswrapper[33013]: I0313 11:13:23.488662 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.488960 master-0 kubenswrapper[33013]: I0313 11:13:23.488685 33013 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.488960 master-0 kubenswrapper[33013]: I0313 11:13:23.488698 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 
11:13:23.489274 master-0 kubenswrapper[33013]: I0313 11:13:23.489078 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-logs" (OuterVolumeSpecName: "logs") pod "0439b27d-bb04-467e-abc2-e155fa98d499" (UID: "0439b27d-bb04-467e-abc2-e155fa98d499"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:13:23.492053 master-0 kubenswrapper[33013]: I0313 11:13:23.491242 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0439b27d-bb04-467e-abc2-e155fa98d499-kube-api-access-nktlp" (OuterVolumeSpecName: "kube-api-access-nktlp") pod "0439b27d-bb04-467e-abc2-e155fa98d499" (UID: "0439b27d-bb04-467e-abc2-e155fa98d499"). InnerVolumeSpecName "kube-api-access-nktlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:23.492387 master-0 kubenswrapper[33013]: I0313 11:13:23.492349 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-scripts" (OuterVolumeSpecName: "scripts") pod "0439b27d-bb04-467e-abc2-e155fa98d499" (UID: "0439b27d-bb04-467e-abc2-e155fa98d499"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:23.606739 master-0 kubenswrapper[33013]: I0313 11:13:23.606680 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nktlp\" (UniqueName: \"kubernetes.io/projected/0439b27d-bb04-467e-abc2-e155fa98d499-kube-api-access-nktlp\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.606739 master-0 kubenswrapper[33013]: I0313 11:13:23.606734 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0439b27d-bb04-467e-abc2-e155fa98d499-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.606739 master-0 kubenswrapper[33013]: I0313 11:13:23.606743 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0439b27d-bb04-467e-abc2-e155fa98d499-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:23.633626 master-0 kubenswrapper[33013]: I0313 11:13:23.631257 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vk2ws"] Mar 13 11:13:23.987975 master-0 kubenswrapper[33013]: I0313 11:13:23.987756 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" event={"ID":"b91d0010-b2bb-4203-99fe-500d30d7d691","Type":"ContainerDied","Data":"71337851cfa8193c9d3a6cefc842a19ca871bbefa1752e82ee2c652e584be2d0"} Mar 13 11:13:23.987975 master-0 kubenswrapper[33013]: I0313 11:13:23.987834 33013 scope.go:117] "RemoveContainer" containerID="b764c2a493281f83a932ef04377d2e7ecfa9a28dc3f7c001b96916d5d7a36b01" Mar 13 11:13:23.987975 master-0 kubenswrapper[33013]: I0313 11:13:23.987786 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-trbrb" Mar 13 11:13:23.993049 master-0 kubenswrapper[33013]: I0313 11:13:23.992406 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"0439b27d-bb04-467e-abc2-e155fa98d499","Type":"ContainerDied","Data":"91ffdf8b2e31be8ef71ccf194e77959524ed1417c478358dc64d0407338baf7f"} Mar 13 11:13:23.993049 master-0 kubenswrapper[33013]: I0313 11:13:23.992465 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:23.997069 master-0 kubenswrapper[33013]: I0313 11:13:23.996101 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vk2ws" event={"ID":"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4","Type":"ContainerStarted","Data":"45a62d89a563106079143e5cb8c9cd86bc0be50dbed03f5991074b7901162a7f"} Mar 13 11:13:23.997069 master-0 kubenswrapper[33013]: I0313 11:13:23.996175 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vk2ws" event={"ID":"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4","Type":"ContainerStarted","Data":"6538170db7af82765e49c8c9b06c3cd440561ca7fb6cd01fbd7702b3c576671c"} Mar 13 11:13:24.027259 master-0 kubenswrapper[33013]: I0313 11:13:24.027087 33013 scope.go:117] "RemoveContainer" containerID="21f3dbe45876a3770a03afe16362eac9fa016be600bd32ecf37f0429082348de" Mar 13 11:13:24.037659 master-0 kubenswrapper[33013]: I0313 11:13:24.037349 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vk2ws" podStartSLOduration=2.037324161 podStartE2EDuration="2.037324161s" podCreationTimestamp="2026-03-13 11:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:24.026788614 +0000 UTC m=+987.502741973" watchObservedRunningTime="2026-03-13 
11:13:24.037324161 +0000 UTC m=+987.513277510" Mar 13 11:13:24.062270 master-0 kubenswrapper[33013]: I0313 11:13:24.062190 33013 scope.go:117] "RemoveContainer" containerID="aef164fc86211ff35feb1d2510002aea9b1224d483a23b250c71ea66e2df3993" Mar 13 11:13:24.070292 master-0 kubenswrapper[33013]: I0313 11:13:24.069804 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-trbrb"] Mar 13 11:13:24.080391 master-0 kubenswrapper[33013]: I0313 11:13:24.080295 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-trbrb"] Mar 13 11:13:24.097552 master-0 kubenswrapper[33013]: I0313 11:13:24.097507 33013 scope.go:117] "RemoveContainer" containerID="1df512348c45be76c8e38e724ffb95204f212fa1f3b12f02dc63cd9fb8ca1c43" Mar 13 11:13:24.808452 master-0 kubenswrapper[33013]: I0313 11:13:24.808395 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b91d0010-b2bb-4203-99fe-500d30d7d691" path="/var/lib/kubelet/pods/b91d0010-b2bb-4203-99fe-500d30d7d691/volumes" Mar 13 11:13:24.841730 master-0 kubenswrapper[33013]: I0313 11:13:24.841517 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c" (OuterVolumeSpecName: "glance") pod "0439b27d-bb04-467e-abc2-e155fa98d499" (UID: "0439b27d-bb04-467e-abc2-e155fa98d499"). InnerVolumeSpecName "pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 11:13:24.844304 master-0 kubenswrapper[33013]: I0313 11:13:24.844266 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:24.971020 master-0 kubenswrapper[33013]: I0313 11:13:24.970930 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:24.972219 master-0 kubenswrapper[33013]: I0313 11:13:24.972159 33013 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") on node \"master-0\" " Mar 13 11:13:24.998567 master-0 kubenswrapper[33013]: I0313 11:13:24.998537 33013 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 13 11:13:24.999249 master-0 kubenswrapper[33013]: I0313 11:13:24.999232 33013 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96" (UniqueName: "kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c") on node "master-0" Mar 13 11:13:25.075452 master-0 kubenswrapper[33013]: I0313 11:13:25.075415 33013 reconciler_common.go:293] "Volume detached for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:25.380681 master-0 kubenswrapper[33013]: I0313 11:13:25.380522 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:13:25.406930 master-0 kubenswrapper[33013]: I0313 11:13:25.406851 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:13:25.423152 master-0 kubenswrapper[33013]: I0313 11:13:25.423083 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: E0313 11:13:25.423706 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91d0010-b2bb-4203-99fe-500d30d7d691" containerName="dnsmasq-dns" Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: I0313 11:13:25.423732 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91d0010-b2bb-4203-99fe-500d30d7d691" containerName="dnsmasq-dns" Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: E0313 11:13:25.423749 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0439b27d-bb04-467e-abc2-e155fa98d499" containerName="glance-httpd" Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: I0313 11:13:25.423756 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0439b27d-bb04-467e-abc2-e155fa98d499" containerName="glance-httpd" Mar 13 
11:13:25.425063 master-0 kubenswrapper[33013]: E0313 11:13:25.423797 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91d0010-b2bb-4203-99fe-500d30d7d691" containerName="init" Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: I0313 11:13:25.423805 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91d0010-b2bb-4203-99fe-500d30d7d691" containerName="init" Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: E0313 11:13:25.423820 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0439b27d-bb04-467e-abc2-e155fa98d499" containerName="glance-log" Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: I0313 11:13:25.423826 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0439b27d-bb04-467e-abc2-e155fa98d499" containerName="glance-log" Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: I0313 11:13:25.424055 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0439b27d-bb04-467e-abc2-e155fa98d499" containerName="glance-log" Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: I0313 11:13:25.424100 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91d0010-b2bb-4203-99fe-500d30d7d691" containerName="dnsmasq-dns" Mar 13 11:13:25.425063 master-0 kubenswrapper[33013]: I0313 11:13:25.424114 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0439b27d-bb04-467e-abc2-e155fa98d499" containerName="glance-httpd" Mar 13 11:13:25.425426 master-0 kubenswrapper[33013]: I0313 11:13:25.425187 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.429763 master-0 kubenswrapper[33013]: I0313 11:13:25.428016 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 13 11:13:25.429763 master-0 kubenswrapper[33013]: I0313 11:13:25.428392 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-87aa4-default-internal-config-data"
Mar 13 11:13:25.438932 master-0 kubenswrapper[33013]: I0313 11:13:25.438860 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"]
Mar 13 11:13:25.485113 master-0 kubenswrapper[33013]: I0313 11:13:25.485033 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.485372 master-0 kubenswrapper[33013]: I0313 11:13:25.485125 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.485372 master-0 kubenswrapper[33013]: I0313 11:13:25.485184 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.485372 master-0 kubenswrapper[33013]: I0313 11:13:25.485210 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.485741 master-0 kubenswrapper[33013]: I0313 11:13:25.485510 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.485741 master-0 kubenswrapper[33013]: I0313 11:13:25.485576 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-internal-tls-certs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.485741 master-0 kubenswrapper[33013]: I0313 11:13:25.485632 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5xfh\" (UniqueName: \"kubernetes.io/projected/bed3cf98-6b1c-4fb8-b082-57025157fab4-kube-api-access-p5xfh\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.485844 master-0 kubenswrapper[33013]: I0313 11:13:25.485815 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.588684 master-0 kubenswrapper[33013]: I0313 11:13:25.587864 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-internal-tls-certs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.588684 master-0 kubenswrapper[33013]: I0313 11:13:25.588680 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5xfh\" (UniqueName: \"kubernetes.io/projected/bed3cf98-6b1c-4fb8-b082-57025157fab4-kube-api-access-p5xfh\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.589221 master-0 kubenswrapper[33013]: I0313 11:13:25.588766 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.589221 master-0 kubenswrapper[33013]: I0313 11:13:25.588894 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.589221 master-0 kubenswrapper[33013]: I0313 11:13:25.588929 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.589221 master-0 kubenswrapper[33013]: I0313 11:13:25.588977 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.589221 master-0 kubenswrapper[33013]: I0313 11:13:25.589000 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.589221 master-0 kubenswrapper[33013]: I0313 11:13:25.589087 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.589646 master-0 kubenswrapper[33013]: I0313 11:13:25.589474 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.590366 master-0 kubenswrapper[33013]: I0313 11:13:25.590334 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.591495 master-0 kubenswrapper[33013]: I0313 11:13:25.591459 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-internal-tls-certs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.594088 master-0 kubenswrapper[33013]: I0313 11:13:25.594055 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.595660 master-0 kubenswrapper[33013]: I0313 11:13:25.595597 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.603085 master-0 kubenswrapper[33013]: I0313 11:13:25.603014 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:25.893386 master-0 kubenswrapper[33013]: I0313 11:13:25.893339 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 13 11:13:25.893386 master-0 kubenswrapper[33013]: I0313 11:13:25.893392 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/946cdf3189fcbc367fb7e7cfd5e4aad164d151a73965b6f865a738752ef6bb2a/globalmount\"" pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:27.105309 master-0 kubenswrapper[33013]: I0313 11:13:27.095625 33013 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-tkcjf" podUID="65e88938-e6c6-4e21-8088-6eddb31f58fc" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 11:13:27.105309 master-0 kubenswrapper[33013]: I0313 11:13:27.097610 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-tkcjf" podUID="65e88938-e6c6-4e21-8088-6eddb31f58fc" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Mar 13 11:13:27.163819 master-0 kubenswrapper[33013]: I0313 11:13:27.146130 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0439b27d-bb04-467e-abc2-e155fa98d499" path="/var/lib/kubelet/pods/0439b27d-bb04-467e-abc2-e155fa98d499/volumes"
Mar 13 11:13:27.277159 master-0 kubenswrapper[33013]: I0313 11:13:27.277080 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5xfh\" (UniqueName: \"kubernetes.io/projected/bed3cf98-6b1c-4fb8-b082-57025157fab4-kube-api-access-p5xfh\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:28.218037 master-0 kubenswrapper[33013]: I0313 11:13:28.217908 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"]
Mar 13 11:13:28.295992 master-0 kubenswrapper[33013]: I0313 11:13:28.295915 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:28.459905 master-0 kubenswrapper[33013]: I0313 11:13:28.457226 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0"
Mar 13 11:13:29.168903 master-0 kubenswrapper[33013]: I0313 11:13:29.168848 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"]
Mar 13 11:13:29.194758 master-0 kubenswrapper[33013]: I0313 11:13:29.194710 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-external-api-0" event={"ID":"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35","Type":"ContainerStarted","Data":"d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b"}
Mar 13 11:13:29.195484 master-0 kubenswrapper[33013]: I0313 11:13:29.195287 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-external-api-0" event={"ID":"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35","Type":"ContainerStarted","Data":"7a7d3485699b3a1c2dc0617cd985bdbbdbc655b9f386d4d1004429d1bab7b5a2"}
Mar 13 11:13:37.328687 master-0 kubenswrapper[33013]: I0313 11:13:37.328621 33013 generic.go:334] "Generic (PLEG): container finished" podID="240bf9bb-a4f9-4b00-9f3b-da8db52d618a" containerID="c6724595de114d87b0a2ca3cb8008804b7047c715a2bb80212ddff89938bc3c2" exitCode=0
Mar 13 11:13:37.329447 master-0 kubenswrapper[33013]: I0313 11:13:37.328736 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2j64b" event={"ID":"240bf9bb-a4f9-4b00-9f3b-da8db52d618a","Type":"ContainerDied","Data":"c6724595de114d87b0a2ca3cb8008804b7047c715a2bb80212ddff89938bc3c2"}
Mar 13 11:13:37.331641 master-0 kubenswrapper[33013]: I0313 11:13:37.331533 33013 generic.go:334] "Generic (PLEG): container finished" podID="9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" containerID="45a62d89a563106079143e5cb8c9cd86bc0be50dbed03f5991074b7901162a7f" exitCode=0
Mar 13 11:13:37.331641 master-0 kubenswrapper[33013]: I0313 11:13:37.331627 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vk2ws" event={"ID":"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4","Type":"ContainerDied","Data":"45a62d89a563106079143e5cb8c9cd86bc0be50dbed03f5991074b7901162a7f"}
Mar 13 11:13:42.387327 master-0 kubenswrapper[33013]: I0313 11:13:42.387273 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"bed3cf98-6b1c-4fb8-b082-57025157fab4","Type":"ContainerStarted","Data":"f360c2612e12204b490419ca34c3e5bf63c37fe893fa79436f38490561d2739b"}
Mar 13 11:13:43.111722 master-0 kubenswrapper[33013]: I0313 11:13:43.111607 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:43.153314 master-0 kubenswrapper[33013]: I0313 11:13:43.153275 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:43.214308 master-0 kubenswrapper[33013]: I0313 11:13:43.214198 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-fernet-keys\") pod \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") "
Mar 13 11:13:43.214308 master-0 kubenswrapper[33013]: I0313 11:13:43.214294 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-scripts\") pod \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") "
Mar 13 11:13:43.214726 master-0 kubenswrapper[33013]: I0313 11:13:43.214695 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-combined-ca-bundle\") pod \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") "
Mar 13 11:13:43.214810 master-0 kubenswrapper[33013]: I0313 11:13:43.214772 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-config-data\") pod \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") "
Mar 13 11:13:43.214810 master-0 kubenswrapper[33013]: I0313 11:13:43.214805 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-credential-keys\") pod \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") "
Mar 13 11:13:43.215043 master-0 kubenswrapper[33013]: I0313 11:13:43.214829 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgc49\" (UniqueName: \"kubernetes.io/projected/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-kube-api-access-zgc49\") pod \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") "
Mar 13 11:13:43.237952 master-0 kubenswrapper[33013]: I0313 11:13:43.237863 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-kube-api-access-zgc49" (OuterVolumeSpecName: "kube-api-access-zgc49") pod "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" (UID: "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4"). InnerVolumeSpecName "kube-api-access-zgc49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:13:43.238416 master-0 kubenswrapper[33013]: I0313 11:13:43.238387 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" (UID: "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:43.240276 master-0 kubenswrapper[33013]: I0313 11:13:43.240253 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" (UID: "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:43.287015 master-0 kubenswrapper[33013]: I0313 11:13:43.286946 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-scripts" (OuterVolumeSpecName: "scripts") pod "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" (UID: "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:43.316054 master-0 kubenswrapper[33013]: I0313 11:13:43.315212 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-config-data" (OuterVolumeSpecName: "config-data") pod "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" (UID: "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:43.320732 master-0 kubenswrapper[33013]: I0313 11:13:43.319332 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" (UID: "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:43.322908 master-0 kubenswrapper[33013]: I0313 11:13:43.322849 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-combined-ca-bundle\") pod \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") "
Mar 13 11:13:43.323896 master-0 kubenswrapper[33013]: I0313 11:13:43.323009 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-logs\") pod \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") "
Mar 13 11:13:43.323896 master-0 kubenswrapper[33013]: I0313 11:13:43.323186 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-scripts\") pod \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") "
Mar 13 11:13:43.323896 master-0 kubenswrapper[33013]: I0313 11:13:43.323252 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-config-data\") pod \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") "
Mar 13 11:13:43.323896 master-0 kubenswrapper[33013]: I0313 11:13:43.323293 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgxrl\" (UniqueName: \"kubernetes.io/projected/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-kube-api-access-rgxrl\") pod \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\" (UID: \"240bf9bb-a4f9-4b00-9f3b-da8db52d618a\") "
Mar 13 11:13:43.323896 master-0 kubenswrapper[33013]: I0313 11:13:43.323330 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-combined-ca-bundle\") pod \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\" (UID: \"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4\") "
Mar 13 11:13:43.323896 master-0 kubenswrapper[33013]: W0313 11:13:43.323687 33013 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4/volumes/kubernetes.io~secret/combined-ca-bundle
Mar 13 11:13:43.323896 master-0 kubenswrapper[33013]: I0313 11:13:43.323702 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" (UID: "9b7b9da5-17ec-4243-8c2f-d4039d9b63f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:43.325129 master-0 kubenswrapper[33013]: I0313 11:13:43.325076 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.325129 master-0 kubenswrapper[33013]: I0313 11:13:43.325132 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.325239 master-0 kubenswrapper[33013]: I0313 11:13:43.325151 33013 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-credential-keys\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.325239 master-0 kubenswrapper[33013]: I0313 11:13:43.325165 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgc49\" (UniqueName: \"kubernetes.io/projected/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-kube-api-access-zgc49\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.325239 master-0 kubenswrapper[33013]: I0313 11:13:43.325176 33013 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-fernet-keys\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.325239 master-0 kubenswrapper[33013]: I0313 11:13:43.325187 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7b9da5-17ec-4243-8c2f-d4039d9b63f4-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.326976 master-0 kubenswrapper[33013]: I0313 11:13:43.326936 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-logs" (OuterVolumeSpecName: "logs") pod "240bf9bb-a4f9-4b00-9f3b-da8db52d618a" (UID: "240bf9bb-a4f9-4b00-9f3b-da8db52d618a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:13:43.335373 master-0 kubenswrapper[33013]: I0313 11:13:43.335311 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-scripts" (OuterVolumeSpecName: "scripts") pod "240bf9bb-a4f9-4b00-9f3b-da8db52d618a" (UID: "240bf9bb-a4f9-4b00-9f3b-da8db52d618a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:43.335979 master-0 kubenswrapper[33013]: I0313 11:13:43.335941 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-kube-api-access-rgxrl" (OuterVolumeSpecName: "kube-api-access-rgxrl") pod "240bf9bb-a4f9-4b00-9f3b-da8db52d618a" (UID: "240bf9bb-a4f9-4b00-9f3b-da8db52d618a"). InnerVolumeSpecName "kube-api-access-rgxrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:13:43.361400 master-0 kubenswrapper[33013]: I0313 11:13:43.361326 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "240bf9bb-a4f9-4b00-9f3b-da8db52d618a" (UID: "240bf9bb-a4f9-4b00-9f3b-da8db52d618a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:43.366781 master-0 kubenswrapper[33013]: I0313 11:13:43.366611 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-config-data" (OuterVolumeSpecName: "config-data") pod "240bf9bb-a4f9-4b00-9f3b-da8db52d618a" (UID: "240bf9bb-a4f9-4b00-9f3b-da8db52d618a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:13:43.423556 master-0 kubenswrapper[33013]: I0313 11:13:43.423454 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2j64b" event={"ID":"240bf9bb-a4f9-4b00-9f3b-da8db52d618a","Type":"ContainerDied","Data":"bd6db7b879612222695369aed4f4abc9b2885c1070bfaf86b1e36b51d7b3bdef"}
Mar 13 11:13:43.424440 master-0 kubenswrapper[33013]: I0313 11:13:43.423603 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6db7b879612222695369aed4f4abc9b2885c1070bfaf86b1e36b51d7b3bdef"
Mar 13 11:13:43.424440 master-0 kubenswrapper[33013]: I0313 11:13:43.423653 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2j64b"
Mar 13 11:13:43.427545 master-0 kubenswrapper[33013]: I0313 11:13:43.427444 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vk2ws" event={"ID":"9b7b9da5-17ec-4243-8c2f-d4039d9b63f4","Type":"ContainerDied","Data":"6538170db7af82765e49c8c9b06c3cd440561ca7fb6cd01fbd7702b3c576671c"}
Mar 13 11:13:43.427545 master-0 kubenswrapper[33013]: I0313 11:13:43.427525 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vk2ws"
Mar 13 11:13:43.427738 master-0 kubenswrapper[33013]: I0313 11:13:43.427552 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6538170db7af82765e49c8c9b06c3cd440561ca7fb6cd01fbd7702b3c576671c"
Mar 13 11:13:43.430808 master-0 kubenswrapper[33013]: I0313 11:13:43.430768 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.430808 master-0 kubenswrapper[33013]: I0313 11:13:43.430808 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.430936 master-0 kubenswrapper[33013]: I0313 11:13:43.430822 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgxrl\" (UniqueName: \"kubernetes.io/projected/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-kube-api-access-rgxrl\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.430936 master-0 kubenswrapper[33013]: I0313 11:13:43.430834 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.430936 master-0 kubenswrapper[33013]: I0313 11:13:43.430842 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/240bf9bb-a4f9-4b00-9f3b-da8db52d618a-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 11:13:43.433450 master-0 kubenswrapper[33013]: I0313 11:13:43.433396 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s6b9s" event={"ID":"1bf1cd3c-9327-4a27-aaee-20da3d6111f1","Type":"ContainerStarted","Data":"df8b7aecca1c43c5c971e112ad6f7b143e63fdb896cde4a0afd1504c5ec45448"}
Mar 13 11:13:44.294707 master-0 kubenswrapper[33013]: I0313 11:13:44.294640 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-56f94cc46b-p5gzb"]
Mar 13 11:13:44.295193 master-0 kubenswrapper[33013]: E0313 11:13:44.295164 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" containerName="keystone-bootstrap"
Mar 13 11:13:44.295193 master-0 kubenswrapper[33013]: I0313 11:13:44.295185 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" containerName="keystone-bootstrap"
Mar 13 11:13:44.295269 master-0 kubenswrapper[33013]: E0313 11:13:44.295205 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240bf9bb-a4f9-4b00-9f3b-da8db52d618a" containerName="placement-db-sync"
Mar 13 11:13:44.295269 master-0 kubenswrapper[33013]: I0313 11:13:44.295213 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="240bf9bb-a4f9-4b00-9f3b-da8db52d618a" containerName="placement-db-sync"
Mar 13 11:13:44.295522 master-0 kubenswrapper[33013]: I0313 11:13:44.295495 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="240bf9bb-a4f9-4b00-9f3b-da8db52d618a" containerName="placement-db-sync"
Mar 13 11:13:44.295574 master-0 kubenswrapper[33013]: I0313 11:13:44.295529 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7b9da5-17ec-4243-8c2f-d4039d9b63f4" containerName="keystone-bootstrap"
Mar 13 11:13:44.296406 master-0 kubenswrapper[33013]: I0313 11:13:44.296364 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56f94cc46b-p5gzb"
Mar 13 11:13:44.302052 master-0 kubenswrapper[33013]: I0313 11:13:44.301831 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Mar 13 11:13:44.302052 master-0 kubenswrapper[33013]: I0313 11:13:44.301844 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 13 11:13:44.304565 master-0 kubenswrapper[33013]: I0313 11:13:44.304463 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 13 11:13:44.319020 master-0 kubenswrapper[33013]: I0313 11:13:44.316876 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56f94cc46b-p5gzb"]
Mar 13 11:13:44.319020 master-0 kubenswrapper[33013]: I0313 11:13:44.318155 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 13 11:13:44.319020 master-0 kubenswrapper[33013]: I0313 11:13:44.318953 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Mar 13 11:13:44.359724 master-0 kubenswrapper[33013]: I0313 11:13:44.354724 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7j6z\" (UniqueName: \"kubernetes.io/projected/f1043f47-656a-4f62-af9e-ca7c7562f2bb-kube-api-access-c7j6z\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb"
Mar 13 11:13:44.359724 master-0 kubenswrapper[33013]: I0313 11:13:44.354836 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-internal-tls-certs\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb"
Mar 13 11:13:44.359724 master-0 kubenswrapper[33013]: I0313 11:13:44.354872 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-credential-keys\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb"
Mar 13 11:13:44.359724 master-0 kubenswrapper[33013]: I0313 11:13:44.354900 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-scripts\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb"
Mar 13 11:13:44.359724 master-0 kubenswrapper[33013]: I0313 11:13:44.354923 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-public-tls-certs\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb"
Mar 13 11:13:44.359724 master-0 kubenswrapper[33013]: I0313 11:13:44.354955 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-config-data\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb"
Mar 13 11:13:44.359724 master-0 kubenswrapper[33013]: I0313 11:13:44.355010 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-fernet-keys\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb"
Mar 13 11:13:44.359724 master-0 kubenswrapper[33013]: I0313 11:13:44.355089 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-combined-ca-bundle\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb"
Mar 13 11:13:44.396669 master-0 kubenswrapper[33013]: I0313 11:13:44.392934 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-686c8b6b46-vlmv5"]
Mar 13 11:13:44.396669 master-0 kubenswrapper[33013]: I0313 11:13:44.395166 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-686c8b6b46-vlmv5"
Mar 13 11:13:44.405709 master-0 kubenswrapper[33013]: I0313 11:13:44.402092 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Mar 13 11:13:44.405709 master-0 kubenswrapper[33013]: I0313 11:13:44.402332 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 13 11:13:44.405709 master-0 kubenswrapper[33013]: I0313 11:13:44.402509 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Mar 13 11:13:44.405709 master-0 kubenswrapper[33013]: I0313 11:13:44.402746 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 13 11:13:44.412756 master-0 kubenswrapper[33013]: I0313 11:13:44.412686 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-686c8b6b46-vlmv5"]
Mar 13 11:13:44.457113 master-0 kubenswrapper[33013]: I0313 11:13:44.457039 33013 generic.go:334] "Generic (PLEG): container finished" podID="1bf1cd3c-9327-4a27-aaee-20da3d6111f1" containerID="df8b7aecca1c43c5c971e112ad6f7b143e63fdb896cde4a0afd1504c5ec45448"
exitCode=0 Mar 13 11:13:44.457797 master-0 kubenswrapper[33013]: I0313 11:13:44.457157 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s6b9s" event={"ID":"1bf1cd3c-9327-4a27-aaee-20da3d6111f1","Type":"ContainerDied","Data":"df8b7aecca1c43c5c971e112ad6f7b143e63fdb896cde4a0afd1504c5ec45448"} Mar 13 11:13:44.465114 master-0 kubenswrapper[33013]: I0313 11:13:44.464724 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-combined-ca-bundle\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.465114 master-0 kubenswrapper[33013]: I0313 11:13:44.464792 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3630054b-4002-4cdc-b667-ad4cece7b207-logs\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.465114 master-0 kubenswrapper[33013]: I0313 11:13:44.464942 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-internal-tls-certs\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.465114 master-0 kubenswrapper[33013]: I0313 11:13:44.465034 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-credential-keys\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.465114 master-0 
kubenswrapper[33013]: I0313 11:13:44.465074 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-scripts\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.465114 master-0 kubenswrapper[33013]: I0313 11:13:44.465108 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-public-tls-certs\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.465369 master-0 kubenswrapper[33013]: I0313 11:13:44.465155 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-config-data\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.465369 master-0 kubenswrapper[33013]: I0313 11:13:44.465210 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-public-tls-certs\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.465369 master-0 kubenswrapper[33013]: I0313 11:13:44.465277 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-scripts\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.465369 master-0 
kubenswrapper[33013]: I0313 11:13:44.465327 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-fernet-keys\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.465568 master-0 kubenswrapper[33013]: I0313 11:13:44.465445 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr6h2\" (UniqueName: \"kubernetes.io/projected/3630054b-4002-4cdc-b667-ad4cece7b207-kube-api-access-wr6h2\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.465568 master-0 kubenswrapper[33013]: I0313 11:13:44.465485 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-combined-ca-bundle\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.465568 master-0 kubenswrapper[33013]: I0313 11:13:44.465544 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-internal-tls-certs\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.467077 master-0 kubenswrapper[33013]: I0313 11:13:44.465573 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-config-data\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " 
pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.467077 master-0 kubenswrapper[33013]: I0313 11:13:44.465691 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7j6z\" (UniqueName: \"kubernetes.io/projected/f1043f47-656a-4f62-af9e-ca7c7562f2bb-kube-api-access-c7j6z\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.484712 master-0 kubenswrapper[33013]: I0313 11:13:44.482558 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-combined-ca-bundle\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.485738 master-0 kubenswrapper[33013]: I0313 11:13:44.485130 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-internal-tls-certs\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.485738 master-0 kubenswrapper[33013]: I0313 11:13:44.485274 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-credential-keys\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.485738 master-0 kubenswrapper[33013]: I0313 11:13:44.485628 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-scripts\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " 
pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.491646 master-0 kubenswrapper[33013]: I0313 11:13:44.487000 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"bed3cf98-6b1c-4fb8-b082-57025157fab4","Type":"ContainerStarted","Data":"d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8"} Mar 13 11:13:44.491646 master-0 kubenswrapper[33013]: I0313 11:13:44.487059 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"bed3cf98-6b1c-4fb8-b082-57025157fab4","Type":"ContainerStarted","Data":"b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd"} Mar 13 11:13:44.491646 master-0 kubenswrapper[33013]: I0313 11:13:44.488142 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7j6z\" (UniqueName: \"kubernetes.io/projected/f1043f47-656a-4f62-af9e-ca7c7562f2bb-kube-api-access-c7j6z\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.491646 master-0 kubenswrapper[33013]: I0313 11:13:44.488796 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-public-tls-certs\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.491646 master-0 kubenswrapper[33013]: I0313 11:13:44.488871 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-fernet-keys\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.491646 master-0 kubenswrapper[33013]: I0313 11:13:44.489763 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1043f47-656a-4f62-af9e-ca7c7562f2bb-config-data\") pod \"keystone-56f94cc46b-p5gzb\" (UID: \"f1043f47-656a-4f62-af9e-ca7c7562f2bb\") " pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.491646 master-0 kubenswrapper[33013]: I0313 11:13:44.489884 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-external-api-0" event={"ID":"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35","Type":"ContainerStarted","Data":"6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff"} Mar 13 11:13:44.496795 master-0 kubenswrapper[33013]: I0313 11:13:44.493076 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-db-sync-trrwb" event={"ID":"40e31a77-1481-4eb8-a192-604aad9eaaf8","Type":"ContainerStarted","Data":"7799eb8576742bcf1aebf5f7efab7dddd0824325f99c0d0b290e2cd6d8325644"} Mar 13 11:13:44.537612 master-0 kubenswrapper[33013]: I0313 11:13:44.536657 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ceac4-db-sync-trrwb" podStartSLOduration=4.200864484 podStartE2EDuration="34.536636472s" podCreationTimestamp="2026-03-13 11:13:10 +0000 UTC" firstStartedPulling="2026-03-13 11:13:12.547898442 +0000 UTC m=+976.023851791" lastFinishedPulling="2026-03-13 11:13:42.88367043 +0000 UTC m=+1006.359623779" observedRunningTime="2026-03-13 11:13:44.531708691 +0000 UTC m=+1008.007662040" watchObservedRunningTime="2026-03-13 11:13:44.536636472 +0000 UTC m=+1008.012589821" Mar 13 11:13:44.572334 master-0 kubenswrapper[33013]: I0313 11:13:44.569632 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-public-tls-certs\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.572334 
master-0 kubenswrapper[33013]: I0313 11:13:44.569700 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-scripts\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.572334 master-0 kubenswrapper[33013]: I0313 11:13:44.569845 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr6h2\" (UniqueName: \"kubernetes.io/projected/3630054b-4002-4cdc-b667-ad4cece7b207-kube-api-access-wr6h2\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.572334 master-0 kubenswrapper[33013]: I0313 11:13:44.569895 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-internal-tls-certs\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.572334 master-0 kubenswrapper[33013]: I0313 11:13:44.569913 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-config-data\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.572334 master-0 kubenswrapper[33013]: I0313 11:13:44.569994 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-combined-ca-bundle\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.572334 
master-0 kubenswrapper[33013]: I0313 11:13:44.570017 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3630054b-4002-4cdc-b667-ad4cece7b207-logs\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.572334 master-0 kubenswrapper[33013]: I0313 11:13:44.571856 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3630054b-4002-4cdc-b667-ad4cece7b207-logs\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.585644 master-0 kubenswrapper[33013]: I0313 11:13:44.582335 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-87aa4-default-internal-api-0" podStartSLOduration=19.582288478 podStartE2EDuration="19.582288478s" podCreationTimestamp="2026-03-13 11:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:44.553050792 +0000 UTC m=+1008.029004141" watchObservedRunningTime="2026-03-13 11:13:44.582288478 +0000 UTC m=+1008.058241827" Mar 13 11:13:44.588646 master-0 kubenswrapper[33013]: I0313 11:13:44.588514 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-public-tls-certs\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.609648 master-0 kubenswrapper[33013]: I0313 11:13:44.608290 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-scripts\") pod \"placement-686c8b6b46-vlmv5\" (UID: 
\"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.616718 master-0 kubenswrapper[33013]: I0313 11:13:44.611328 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-internal-tls-certs\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.618430 master-0 kubenswrapper[33013]: I0313 11:13:44.617104 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-87aa4-default-external-api-0" podStartSLOduration=24.617080514 podStartE2EDuration="24.617080514s" podCreationTimestamp="2026-03-13 11:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:44.60783692 +0000 UTC m=+1008.083790269" watchObservedRunningTime="2026-03-13 11:13:44.617080514 +0000 UTC m=+1008.093033863" Mar 13 11:13:44.627134 master-0 kubenswrapper[33013]: I0313 11:13:44.625186 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-config-data\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.627134 master-0 kubenswrapper[33013]: I0313 11:13:44.625972 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:44.633167 master-0 kubenswrapper[33013]: I0313 11:13:44.632013 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-combined-ca-bundle\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.636208 master-0 kubenswrapper[33013]: I0313 11:13:44.636163 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr6h2\" (UniqueName: \"kubernetes.io/projected/3630054b-4002-4cdc-b667-ad4cece7b207-kube-api-access-wr6h2\") pod \"placement-686c8b6b46-vlmv5\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.735731 master-0 kubenswrapper[33013]: I0313 11:13:44.735529 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:44.938919 master-0 kubenswrapper[33013]: I0313 11:13:44.938441 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-59958fcccb-m5c9g"] Mar 13 11:13:44.941819 master-0 kubenswrapper[33013]: I0313 11:13:44.941576 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:44.999781 master-0 kubenswrapper[33013]: I0313 11:13:44.982429 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:44.999781 master-0 kubenswrapper[33013]: I0313 11:13:44.982486 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:45.053035 master-0 kubenswrapper[33013]: I0313 11:13:45.047762 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59958fcccb-m5c9g"] Mar 13 11:13:45.157688 master-0 kubenswrapper[33013]: I0313 11:13:45.147910 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-scripts\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.157688 master-0 kubenswrapper[33013]: I0313 11:13:45.147987 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-combined-ca-bundle\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.157688 master-0 kubenswrapper[33013]: I0313 11:13:45.148065 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-config-data\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.157688 master-0 kubenswrapper[33013]: I0313 11:13:45.148118 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9970c66e-4f7d-432c-8635-a7e19df0c9f8-logs\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.157688 master-0 kubenswrapper[33013]: I0313 11:13:45.148136 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-public-tls-certs\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.157688 master-0 kubenswrapper[33013]: I0313 11:13:45.148177 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-internal-tls-certs\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.157688 master-0 kubenswrapper[33013]: I0313 11:13:45.148207 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdsn2\" (UniqueName: \"kubernetes.io/projected/9970c66e-4f7d-432c-8635-a7e19df0c9f8-kube-api-access-sdsn2\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.228137 master-0 kubenswrapper[33013]: I0313 11:13:45.225491 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:45.251736 master-0 kubenswrapper[33013]: I0313 11:13:45.251340 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-scripts\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.251736 master-0 kubenswrapper[33013]: I0313 11:13:45.251414 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-combined-ca-bundle\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.251736 master-0 kubenswrapper[33013]: I0313 11:13:45.251483 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-config-data\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.251736 master-0 kubenswrapper[33013]: I0313 11:13:45.251523 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9970c66e-4f7d-432c-8635-a7e19df0c9f8-logs\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.251736 master-0 kubenswrapper[33013]: I0313 11:13:45.251541 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-public-tls-certs\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.271928 master-0 kubenswrapper[33013]: I0313 11:13:45.253369 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-internal-tls-certs\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.271928 master-0 kubenswrapper[33013]: I0313 11:13:45.253430 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdsn2\" (UniqueName: \"kubernetes.io/projected/9970c66e-4f7d-432c-8635-a7e19df0c9f8-kube-api-access-sdsn2\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.271928 master-0 kubenswrapper[33013]: I0313 11:13:45.259092 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9970c66e-4f7d-432c-8635-a7e19df0c9f8-logs\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.294618 master-0 kubenswrapper[33013]: I0313 11:13:45.277707 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-config-data\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.294618 master-0 kubenswrapper[33013]: I0313 11:13:45.278027 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:45.294618 master-0 kubenswrapper[33013]: I0313 11:13:45.278431 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-internal-tls-certs\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.294618 
master-0 kubenswrapper[33013]: I0313 11:13:45.281274 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-scripts\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.294618 master-0 kubenswrapper[33013]: I0313 11:13:45.294606 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-combined-ca-bundle\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.296352 master-0 kubenswrapper[33013]: I0313 11:13:45.296268 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdsn2\" (UniqueName: \"kubernetes.io/projected/9970c66e-4f7d-432c-8635-a7e19df0c9f8-kube-api-access-sdsn2\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.299438 master-0 kubenswrapper[33013]: I0313 11:13:45.299336 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9970c66e-4f7d-432c-8635-a7e19df0c9f8-public-tls-certs\") pod \"placement-59958fcccb-m5c9g\" (UID: \"9970c66e-4f7d-432c-8635-a7e19df0c9f8\") " pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.349744 master-0 kubenswrapper[33013]: I0313 11:13:45.349661 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56f94cc46b-p5gzb"] Mar 13 11:13:45.390689 master-0 kubenswrapper[33013]: I0313 11:13:45.382134 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:45.523611 master-0 kubenswrapper[33013]: I0313 11:13:45.522683 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s6b9s" event={"ID":"1bf1cd3c-9327-4a27-aaee-20da3d6111f1","Type":"ContainerStarted","Data":"c113f3406a1c342fd1f464e66bec3ef02c626a4b37e1ce92433b2cf7cf2ef162"} Mar 13 11:13:45.530694 master-0 kubenswrapper[33013]: I0313 11:13:45.530642 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56f94cc46b-p5gzb" event={"ID":"f1043f47-656a-4f62-af9e-ca7c7562f2bb","Type":"ContainerStarted","Data":"2bb65e22068d468976f2c13ddc9b57207fa15d805aabe3d8e3947019f4c58caa"} Mar 13 11:13:45.533271 master-0 kubenswrapper[33013]: I0313 11:13:45.532256 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:45.533271 master-0 kubenswrapper[33013]: I0313 11:13:45.532318 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:45.575044 master-0 kubenswrapper[33013]: I0313 11:13:45.569514 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-686c8b6b46-vlmv5"] Mar 13 11:13:45.581606 master-0 kubenswrapper[33013]: I0313 11:13:45.573167 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-s6b9s" podStartSLOduration=3.7418346270000002 podStartE2EDuration="24.573146231s" podCreationTimestamp="2026-03-13 11:13:21 +0000 UTC" firstStartedPulling="2026-03-13 11:13:22.052783638 +0000 UTC m=+985.528736987" lastFinishedPulling="2026-03-13 11:13:42.884095242 +0000 UTC m=+1006.360048591" observedRunningTime="2026-03-13 11:13:45.562557588 +0000 UTC m=+1009.038510937" watchObservedRunningTime="2026-03-13 11:13:45.573146231 +0000 UTC m=+1009.049099580" Mar 13 11:13:46.162731 master-0 kubenswrapper[33013]: I0313 
11:13:46.158107 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59958fcccb-m5c9g"] Mar 13 11:13:46.169697 master-0 kubenswrapper[33013]: W0313 11:13:46.169563 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9970c66e_4f7d_432c_8635_a7e19df0c9f8.slice/crio-5da6b5ca0aeb7cb7a93cf4a979cd42296487c3d084dba0a877aa827c5b8fa221 WatchSource:0}: Error finding container 5da6b5ca0aeb7cb7a93cf4a979cd42296487c3d084dba0a877aa827c5b8fa221: Status 404 returned error can't find the container with id 5da6b5ca0aeb7cb7a93cf4a979cd42296487c3d084dba0a877aa827c5b8fa221 Mar 13 11:13:46.545825 master-0 kubenswrapper[33013]: I0313 11:13:46.545754 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-686c8b6b46-vlmv5" event={"ID":"3630054b-4002-4cdc-b667-ad4cece7b207","Type":"ContainerStarted","Data":"881f2d9d76ce2aa5c00a7f7ec95f29b43793e7ac713363e8fbfb81ad7dfd2f40"} Mar 13 11:13:46.545825 master-0 kubenswrapper[33013]: I0313 11:13:46.545818 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-686c8b6b46-vlmv5" event={"ID":"3630054b-4002-4cdc-b667-ad4cece7b207","Type":"ContainerStarted","Data":"fcc5a610adc6d0d7a7fca8489f5839daa731e26ad101cd880b65e8a404682502"} Mar 13 11:13:46.545825 master-0 kubenswrapper[33013]: I0313 11:13:46.545837 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-686c8b6b46-vlmv5" event={"ID":"3630054b-4002-4cdc-b667-ad4cece7b207","Type":"ContainerStarted","Data":"658e312ea5140366337197ba2dc803f72f06c2836e27def07efa2bc9837d0ccf"} Mar 13 11:13:46.548202 master-0 kubenswrapper[33013]: I0313 11:13:46.548166 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56f94cc46b-p5gzb" event={"ID":"f1043f47-656a-4f62-af9e-ca7c7562f2bb","Type":"ContainerStarted","Data":"7a5dcd9fd38195353d26571732bd58d332ede59a879a5fe64d68d4d52497cf22"} Mar 13 
11:13:46.548265 master-0 kubenswrapper[33013]: I0313 11:13:46.548214 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:13:46.553928 master-0 kubenswrapper[33013]: I0313 11:13:46.553269 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59958fcccb-m5c9g" event={"ID":"9970c66e-4f7d-432c-8635-a7e19df0c9f8","Type":"ContainerStarted","Data":"5da6b5ca0aeb7cb7a93cf4a979cd42296487c3d084dba0a877aa827c5b8fa221"} Mar 13 11:13:46.675611 master-0 kubenswrapper[33013]: I0313 11:13:46.667871 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-56f94cc46b-p5gzb" podStartSLOduration=2.667841464 podStartE2EDuration="2.667841464s" podCreationTimestamp="2026-03-13 11:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:46.641309475 +0000 UTC m=+1010.117262824" watchObservedRunningTime="2026-03-13 11:13:46.667841464 +0000 UTC m=+1010.143794833" Mar 13 11:13:47.575719 master-0 kubenswrapper[33013]: I0313 11:13:47.568721 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59958fcccb-m5c9g" event={"ID":"9970c66e-4f7d-432c-8635-a7e19df0c9f8","Type":"ContainerStarted","Data":"b6b0149a5bccf54b6c49fcf221768673b8aff6ed9fce8094bb84fd2d3ebd57a2"} Mar 13 11:13:47.575719 master-0 kubenswrapper[33013]: I0313 11:13:47.568767 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59958fcccb-m5c9g" event={"ID":"9970c66e-4f7d-432c-8635-a7e19df0c9f8","Type":"ContainerStarted","Data":"7f1260aa6a31caa6af951a5bf3640114ca9d8a22fc59604fa62438c11b2bc693"} Mar 13 11:13:47.575719 master-0 kubenswrapper[33013]: I0313 11:13:47.568781 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:47.575719 master-0 kubenswrapper[33013]: I0313 
11:13:47.569285 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:47.575719 master-0 kubenswrapper[33013]: I0313 11:13:47.569311 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:13:47.575719 master-0 kubenswrapper[33013]: I0313 11:13:47.569330 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:13:47.666599 master-0 kubenswrapper[33013]: I0313 11:13:47.666480 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-59958fcccb-m5c9g" podStartSLOduration=3.666456869 podStartE2EDuration="3.666456869s" podCreationTimestamp="2026-03-13 11:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:47.653447006 +0000 UTC m=+1011.129400365" watchObservedRunningTime="2026-03-13 11:13:47.666456869 +0000 UTC m=+1011.142410218" Mar 13 11:13:47.670021 master-0 kubenswrapper[33013]: I0313 11:13:47.669940 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-686c8b6b46-vlmv5" podStartSLOduration=3.669921738 podStartE2EDuration="3.669921738s" podCreationTimestamp="2026-03-13 11:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:47.609137999 +0000 UTC m=+1011.085091358" watchObservedRunningTime="2026-03-13 11:13:47.669921738 +0000 UTC m=+1011.145875087" Mar 13 11:13:48.084384 master-0 kubenswrapper[33013]: I0313 11:13:48.084126 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:48.458461 master-0 kubenswrapper[33013]: I0313 11:13:48.458138 33013 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:48.458461 master-0 kubenswrapper[33013]: I0313 11:13:48.458283 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:48.490476 master-0 kubenswrapper[33013]: I0313 11:13:48.489841 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:48.507785 master-0 kubenswrapper[33013]: I0313 11:13:48.507738 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:48.584244 master-0 kubenswrapper[33013]: I0313 11:13:48.584176 33013 generic.go:334] "Generic (PLEG): container finished" podID="24aec7cb-081e-4a89-80bb-b11d4e085557" containerID="61a0b5b5445d45b58f01dcc32d8fe030dd270a4b6d133b800bec37de73e0d424" exitCode=0 Mar 13 11:13:48.586639 master-0 kubenswrapper[33013]: I0313 11:13:48.584406 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4x5s9" event={"ID":"24aec7cb-081e-4a89-80bb-b11d4e085557","Type":"ContainerDied","Data":"61a0b5b5445d45b58f01dcc32d8fe030dd270a4b6d133b800bec37de73e0d424"} Mar 13 11:13:48.586639 master-0 kubenswrapper[33013]: I0313 11:13:48.584940 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:48.586639 master-0 kubenswrapper[33013]: I0313 11:13:48.584962 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:50.043550 master-0 kubenswrapper[33013]: I0313 11:13:50.043466 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:50.108781 master-0 kubenswrapper[33013]: I0313 11:13:50.108535 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kswhl\" (UniqueName: \"kubernetes.io/projected/24aec7cb-081e-4a89-80bb-b11d4e085557-kube-api-access-kswhl\") pod \"24aec7cb-081e-4a89-80bb-b11d4e085557\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " Mar 13 11:13:50.108781 master-0 kubenswrapper[33013]: I0313 11:13:50.108732 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-config\") pod \"24aec7cb-081e-4a89-80bb-b11d4e085557\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " Mar 13 11:13:50.108781 master-0 kubenswrapper[33013]: I0313 11:13:50.108793 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-combined-ca-bundle\") pod \"24aec7cb-081e-4a89-80bb-b11d4e085557\" (UID: \"24aec7cb-081e-4a89-80bb-b11d4e085557\") " Mar 13 11:13:50.117112 master-0 kubenswrapper[33013]: I0313 11:13:50.117063 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24aec7cb-081e-4a89-80bb-b11d4e085557-kube-api-access-kswhl" (OuterVolumeSpecName: "kube-api-access-kswhl") pod "24aec7cb-081e-4a89-80bb-b11d4e085557" (UID: "24aec7cb-081e-4a89-80bb-b11d4e085557"). InnerVolumeSpecName "kube-api-access-kswhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:50.142195 master-0 kubenswrapper[33013]: I0313 11:13:50.142131 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24aec7cb-081e-4a89-80bb-b11d4e085557" (UID: "24aec7cb-081e-4a89-80bb-b11d4e085557"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:50.150344 master-0 kubenswrapper[33013]: I0313 11:13:50.150279 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-config" (OuterVolumeSpecName: "config") pod "24aec7cb-081e-4a89-80bb-b11d4e085557" (UID: "24aec7cb-081e-4a89-80bb-b11d4e085557"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:50.211973 master-0 kubenswrapper[33013]: I0313 11:13:50.211920 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kswhl\" (UniqueName: \"kubernetes.io/projected/24aec7cb-081e-4a89-80bb-b11d4e085557-kube-api-access-kswhl\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:50.212259 master-0 kubenswrapper[33013]: I0313 11:13:50.212242 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:50.212359 master-0 kubenswrapper[33013]: I0313 11:13:50.212346 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24aec7cb-081e-4a89-80bb-b11d4e085557-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:50.481357 master-0 kubenswrapper[33013]: I0313 11:13:50.481228 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:13:50.638628 master-0 kubenswrapper[33013]: I0313 11:13:50.637353 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4x5s9" event={"ID":"24aec7cb-081e-4a89-80bb-b11d4e085557","Type":"ContainerDied","Data":"355180f1a9cbd89b40cc9fcd41b62ffbcdad881774ab5c331d89cca441cdc526"} Mar 13 11:13:50.638628 master-0 kubenswrapper[33013]: I0313 11:13:50.637416 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="355180f1a9cbd89b40cc9fcd41b62ffbcdad881774ab5c331d89cca441cdc526" Mar 13 11:13:50.638628 master-0 kubenswrapper[33013]: I0313 11:13:50.637485 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4x5s9" Mar 13 11:13:50.800975 master-0 kubenswrapper[33013]: I0313 11:13:50.800925 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:50.966575 master-0 kubenswrapper[33013]: I0313 11:13:50.966494 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-zzxl9"] Mar 13 11:13:50.967805 master-0 kubenswrapper[33013]: E0313 11:13:50.967783 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24aec7cb-081e-4a89-80bb-b11d4e085557" containerName="neutron-db-sync" Mar 13 11:13:50.967940 master-0 kubenswrapper[33013]: I0313 11:13:50.967927 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="24aec7cb-081e-4a89-80bb-b11d4e085557" containerName="neutron-db-sync" Mar 13 11:13:50.968361 master-0 kubenswrapper[33013]: I0313 11:13:50.968344 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="24aec7cb-081e-4a89-80bb-b11d4e085557" containerName="neutron-db-sync" Mar 13 11:13:50.970245 master-0 kubenswrapper[33013]: I0313 11:13:50.970218 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:50.999804 master-0 kubenswrapper[33013]: I0313 11:13:50.980315 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-zzxl9"] Mar 13 11:13:51.144526 master-0 kubenswrapper[33013]: I0313 11:13:51.144134 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-config\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.144526 master-0 kubenswrapper[33013]: I0313 11:13:51.144197 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-svc\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.144526 master-0 kubenswrapper[33013]: I0313 11:13:51.144308 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6l4\" (UniqueName: \"kubernetes.io/projected/3d7d1181-e58a-41be-af8d-c209ff199f13-kube-api-access-5n6l4\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.144526 master-0 kubenswrapper[33013]: I0313 11:13:51.144353 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.144526 master-0 kubenswrapper[33013]: I0313 
11:13:51.144487 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.144526 master-0 kubenswrapper[33013]: I0313 11:13:51.144528 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.149707 master-0 kubenswrapper[33013]: I0313 11:13:51.146781 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-599ddd56fb-m48bv"] Mar 13 11:13:51.149707 master-0 kubenswrapper[33013]: I0313 11:13:51.149233 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.153609 master-0 kubenswrapper[33013]: I0313 11:13:51.152048 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 13 11:13:51.153609 master-0 kubenswrapper[33013]: I0313 11:13:51.152962 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 13 11:13:51.156133 master-0 kubenswrapper[33013]: I0313 11:13:51.156104 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 13 11:13:51.177141 master-0 kubenswrapper[33013]: I0313 11:13:51.177077 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-599ddd56fb-m48bv"] Mar 13 11:13:51.246178 master-0 kubenswrapper[33013]: I0313 11:13:51.246101 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-config\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.246178 master-0 kubenswrapper[33013]: I0313 11:13:51.246159 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-svc\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.246178 master-0 kubenswrapper[33013]: I0313 11:13:51.246194 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-httpd-config\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.246545 master-0 
kubenswrapper[33013]: I0313 11:13:51.246262 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6l4\" (UniqueName: \"kubernetes.io/projected/3d7d1181-e58a-41be-af8d-c209ff199f13-kube-api-access-5n6l4\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.246545 master-0 kubenswrapper[33013]: I0313 11:13:51.246290 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwkbh\" (UniqueName: \"kubernetes.io/projected/142b9e51-cb04-42ce-b6f5-b0554d9585a2-kube-api-access-qwkbh\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.246545 master-0 kubenswrapper[33013]: I0313 11:13:51.246310 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.246545 master-0 kubenswrapper[33013]: I0313 11:13:51.246359 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-ovndb-tls-certs\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.246545 master-0 kubenswrapper[33013]: I0313 11:13:51.246423 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-config\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " 
pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.246545 master-0 kubenswrapper[33013]: I0313 11:13:51.246443 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.246545 master-0 kubenswrapper[33013]: I0313 11:13:51.246490 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.246545 master-0 kubenswrapper[33013]: I0313 11:13:51.246530 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-combined-ca-bundle\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.251016 master-0 kubenswrapper[33013]: I0313 11:13:51.247504 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.251016 master-0 kubenswrapper[33013]: I0313 11:13:51.247687 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-svc\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: 
\"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.251016 master-0 kubenswrapper[33013]: I0313 11:13:51.247732 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-config\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.253138 master-0 kubenswrapper[33013]: I0313 11:13:51.251567 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.253138 master-0 kubenswrapper[33013]: I0313 11:13:51.251569 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.271806 master-0 kubenswrapper[33013]: I0313 11:13:51.271712 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6l4\" (UniqueName: \"kubernetes.io/projected/3d7d1181-e58a-41be-af8d-c209ff199f13-kube-api-access-5n6l4\") pod \"dnsmasq-dns-6cf64fcfbc-zzxl9\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.321442 master-0 kubenswrapper[33013]: I0313 11:13:51.321373 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:51.349696 master-0 kubenswrapper[33013]: I0313 11:13:51.349630 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-httpd-config\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.350213 master-0 kubenswrapper[33013]: I0313 11:13:51.350188 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwkbh\" (UniqueName: \"kubernetes.io/projected/142b9e51-cb04-42ce-b6f5-b0554d9585a2-kube-api-access-qwkbh\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.350417 master-0 kubenswrapper[33013]: I0313 11:13:51.350398 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-ovndb-tls-certs\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.350688 master-0 kubenswrapper[33013]: I0313 11:13:51.350664 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-config\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.350896 master-0 kubenswrapper[33013]: I0313 11:13:51.350875 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-combined-ca-bundle\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " 
pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.354381 master-0 kubenswrapper[33013]: I0313 11:13:51.354333 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-httpd-config\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.354544 master-0 kubenswrapper[33013]: I0313 11:13:51.354433 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-ovndb-tls-certs\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.357669 master-0 kubenswrapper[33013]: I0313 11:13:51.357629 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-config\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.358226 master-0 kubenswrapper[33013]: I0313 11:13:51.358180 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-combined-ca-bundle\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.374768 master-0 kubenswrapper[33013]: I0313 11:13:51.374712 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwkbh\" (UniqueName: \"kubernetes.io/projected/142b9e51-cb04-42ce-b6f5-b0554d9585a2-kube-api-access-qwkbh\") pod \"neutron-599ddd56fb-m48bv\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") " pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.492674 master-0 
kubenswrapper[33013]: I0313 11:13:51.492604 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:51.889774 master-0 kubenswrapper[33013]: I0313 11:13:51.889702 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-zzxl9"] Mar 13 11:13:52.162264 master-0 kubenswrapper[33013]: I0313 11:13:52.160414 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:13:52.320828 master-0 kubenswrapper[33013]: W0313 11:13:52.320603 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod142b9e51_cb04_42ce_b6f5_b0554d9585a2.slice/crio-6af1f0e61e74405088c3162374d373c94ca601a4a6aacff5fcdb66babfa11985 WatchSource:0}: Error finding container 6af1f0e61e74405088c3162374d373c94ca601a4a6aacff5fcdb66babfa11985: Status 404 returned error can't find the container with id 6af1f0e61e74405088c3162374d373c94ca601a4a6aacff5fcdb66babfa11985 Mar 13 11:13:52.322546 master-0 kubenswrapper[33013]: I0313 11:13:52.322491 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-599ddd56fb-m48bv"] Mar 13 11:13:52.669610 master-0 kubenswrapper[33013]: I0313 11:13:52.669288 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599ddd56fb-m48bv" event={"ID":"142b9e51-cb04-42ce-b6f5-b0554d9585a2","Type":"ContainerStarted","Data":"1493f263662fa53e88cc5d65bedcd38bf9d16f06a98ec33e3cf9ff571b46d841"} Mar 13 11:13:52.669610 master-0 kubenswrapper[33013]: I0313 11:13:52.669347 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599ddd56fb-m48bv" event={"ID":"142b9e51-cb04-42ce-b6f5-b0554d9585a2","Type":"ContainerStarted","Data":"6af1f0e61e74405088c3162374d373c94ca601a4a6aacff5fcdb66babfa11985"} Mar 13 11:13:52.673616 master-0 kubenswrapper[33013]: I0313 11:13:52.672233 
33013 generic.go:334] "Generic (PLEG): container finished" podID="3d7d1181-e58a-41be-af8d-c209ff199f13" containerID="de73f389128b0feb349e451e751dd83b09fae78d8db8dff37b78a347aa965355" exitCode=0 Mar 13 11:13:52.673616 master-0 kubenswrapper[33013]: I0313 11:13:52.672288 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" event={"ID":"3d7d1181-e58a-41be-af8d-c209ff199f13","Type":"ContainerDied","Data":"de73f389128b0feb349e451e751dd83b09fae78d8db8dff37b78a347aa965355"} Mar 13 11:13:52.673616 master-0 kubenswrapper[33013]: I0313 11:13:52.672317 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" event={"ID":"3d7d1181-e58a-41be-af8d-c209ff199f13","Type":"ContainerStarted","Data":"749591acab914c8606ff2c09684fc446f49940d251b6a3d77ed560fec455bf81"} Mar 13 11:13:53.567385 master-0 kubenswrapper[33013]: I0313 11:13:53.567311 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c5777cc85-7p8mx"] Mar 13 11:13:53.569715 master-0 kubenswrapper[33013]: I0313 11:13:53.569677 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.572012 master-0 kubenswrapper[33013]: I0313 11:13:53.571968 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 13 11:13:53.572157 master-0 kubenswrapper[33013]: I0313 11:13:53.571988 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 13 11:13:53.597254 master-0 kubenswrapper[33013]: I0313 11:13:53.597174 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c5777cc85-7p8mx"] Mar 13 11:13:53.664902 master-0 kubenswrapper[33013]: I0313 11:13:53.660710 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-ovndb-tls-certs\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.664902 master-0 kubenswrapper[33013]: I0313 11:13:53.660815 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-public-tls-certs\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.664902 master-0 kubenswrapper[33013]: I0313 11:13:53.660851 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-internal-tls-certs\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.664902 master-0 kubenswrapper[33013]: I0313 11:13:53.660876 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scnw8\" (UniqueName: \"kubernetes.io/projected/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-kube-api-access-scnw8\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.664902 master-0 kubenswrapper[33013]: I0313 11:13:53.660900 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-httpd-config\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.664902 master-0 kubenswrapper[33013]: I0313 11:13:53.661042 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-combined-ca-bundle\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.664902 master-0 kubenswrapper[33013]: I0313 11:13:53.661241 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-config\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.698656 master-0 kubenswrapper[33013]: I0313 11:13:53.695249 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" event={"ID":"3d7d1181-e58a-41be-af8d-c209ff199f13","Type":"ContainerStarted","Data":"2c9c7846275649177a123dec6a23389fc78b5cd20e22e947b8f23760da34c563"} Mar 13 11:13:53.698656 master-0 kubenswrapper[33013]: I0313 11:13:53.698499 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:53.723688 master-0 kubenswrapper[33013]: I0313 11:13:53.720373 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599ddd56fb-m48bv" event={"ID":"142b9e51-cb04-42ce-b6f5-b0554d9585a2","Type":"ContainerStarted","Data":"74c6758cb83b178f150a2428138bc6dd87b3665d5f497b6843db0a0dcb9d6546"} Mar 13 11:13:53.723688 master-0 kubenswrapper[33013]: I0313 11:13:53.721952 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:13:53.746195 master-0 kubenswrapper[33013]: I0313 11:13:53.746081 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" podStartSLOduration=3.74604666 podStartE2EDuration="3.74604666s" podCreationTimestamp="2026-03-13 11:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:53.740951044 +0000 UTC m=+1017.216904383" watchObservedRunningTime="2026-03-13 11:13:53.74604666 +0000 UTC m=+1017.222000019" Mar 13 11:13:53.765145 master-0 kubenswrapper[33013]: I0313 11:13:53.763883 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-config\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.765145 master-0 kubenswrapper[33013]: I0313 11:13:53.764045 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-ovndb-tls-certs\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.765145 master-0 kubenswrapper[33013]: I0313 11:13:53.764106 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-public-tls-certs\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.765145 master-0 kubenswrapper[33013]: I0313 11:13:53.764139 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-internal-tls-certs\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.765145 master-0 kubenswrapper[33013]: I0313 11:13:53.764161 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scnw8\" (UniqueName: \"kubernetes.io/projected/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-kube-api-access-scnw8\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.765145 master-0 kubenswrapper[33013]: I0313 11:13:53.764185 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-httpd-config\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.765145 master-0 kubenswrapper[33013]: I0313 11:13:53.764222 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-combined-ca-bundle\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.770606 master-0 kubenswrapper[33013]: I0313 11:13:53.770432 33013 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-httpd-config\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.771649 master-0 kubenswrapper[33013]: I0313 11:13:53.771514 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-ovndb-tls-certs\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.781614 master-0 kubenswrapper[33013]: I0313 11:13:53.775521 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-public-tls-certs\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.781614 master-0 kubenswrapper[33013]: I0313 11:13:53.778012 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-internal-tls-certs\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.781614 master-0 kubenswrapper[33013]: I0313 11:13:53.781254 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-config\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.789166 master-0 kubenswrapper[33013]: I0313 11:13:53.784358 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-combined-ca-bundle\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.798286 master-0 kubenswrapper[33013]: I0313 11:13:53.797893 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scnw8\" (UniqueName: \"kubernetes.io/projected/94d0d52c-43f2-4ca6-a11c-0cdc68e4465b-kube-api-access-scnw8\") pod \"neutron-c5777cc85-7p8mx\" (UID: \"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b\") " pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:53.893178 master-0 kubenswrapper[33013]: I0313 11:13:53.893019 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:54.093759 master-0 kubenswrapper[33013]: E0313 11:13:54.093679 33013 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40e31a77_1481_4eb8_a192_604aad9eaaf8.slice/crio-conmon-7799eb8576742bcf1aebf5f7efab7dddd0824325f99c0d0b290e2cd6d8325644.scope\": RecentStats: unable to find data in memory cache]" Mar 13 11:13:54.472953 master-0 kubenswrapper[33013]: I0313 11:13:54.471574 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-599ddd56fb-m48bv" podStartSLOduration=3.471546339 podStartE2EDuration="3.471546339s" podCreationTimestamp="2026-03-13 11:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:53.779880788 +0000 UTC m=+1017.255834137" watchObservedRunningTime="2026-03-13 11:13:54.471546339 +0000 UTC m=+1017.947499688" Mar 13 11:13:54.477173 master-0 kubenswrapper[33013]: I0313 11:13:54.477102 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c5777cc85-7p8mx"] 
Mar 13 11:13:54.487722 master-0 kubenswrapper[33013]: W0313 11:13:54.487663 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94d0d52c_43f2_4ca6_a11c_0cdc68e4465b.slice/crio-4d1e6f1172a35e4d41737d66d0779d598d77f4ed03d5ba22ec595e83d1186867 WatchSource:0}: Error finding container 4d1e6f1172a35e4d41737d66d0779d598d77f4ed03d5ba22ec595e83d1186867: Status 404 returned error can't find the container with id 4d1e6f1172a35e4d41737d66d0779d598d77f4ed03d5ba22ec595e83d1186867 Mar 13 11:13:54.747444 master-0 kubenswrapper[33013]: I0313 11:13:54.745222 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c5777cc85-7p8mx" event={"ID":"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b","Type":"ContainerStarted","Data":"4d1e6f1172a35e4d41737d66d0779d598d77f4ed03d5ba22ec595e83d1186867"} Mar 13 11:13:54.751245 master-0 kubenswrapper[33013]: I0313 11:13:54.751183 33013 generic.go:334] "Generic (PLEG): container finished" podID="40e31a77-1481-4eb8-a192-604aad9eaaf8" containerID="7799eb8576742bcf1aebf5f7efab7dddd0824325f99c0d0b290e2cd6d8325644" exitCode=0 Mar 13 11:13:54.751395 master-0 kubenswrapper[33013]: I0313 11:13:54.751273 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-db-sync-trrwb" event={"ID":"40e31a77-1481-4eb8-a192-604aad9eaaf8","Type":"ContainerDied","Data":"7799eb8576742bcf1aebf5f7efab7dddd0824325f99c0d0b290e2cd6d8325644"} Mar 13 11:13:55.773718 master-0 kubenswrapper[33013]: I0313 11:13:55.773638 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c5777cc85-7p8mx" event={"ID":"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b","Type":"ContainerStarted","Data":"1616b48cbf63736da57e69cc005d3aebfe1e44a7c674b5aecc2c43fcfc6b5ea7"} Mar 13 11:13:55.773718 master-0 kubenswrapper[33013]: I0313 11:13:55.773716 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c5777cc85-7p8mx" 
event={"ID":"94d0d52c-43f2-4ca6-a11c-0cdc68e4465b","Type":"ContainerStarted","Data":"682c813e47bc123b50dd24546a9c4e283fa423c8b367c4a0ba03ec93e6117c17"} Mar 13 11:13:55.774397 master-0 kubenswrapper[33013]: I0313 11:13:55.773760 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:13:55.844262 master-0 kubenswrapper[33013]: I0313 11:13:55.839157 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c5777cc85-7p8mx" podStartSLOduration=2.8391378019999998 podStartE2EDuration="2.839137802s" podCreationTimestamp="2026-03-13 11:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:13:55.808007381 +0000 UTC m=+1019.283960740" watchObservedRunningTime="2026-03-13 11:13:55.839137802 +0000 UTC m=+1019.315091151" Mar 13 11:13:56.185246 master-0 kubenswrapper[33013]: I0313 11:13:56.184487 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-db-sync-trrwb" Mar 13 11:13:56.258136 master-0 kubenswrapper[33013]: I0313 11:13:56.258054 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e31a77-1481-4eb8-a192-604aad9eaaf8-etc-machine-id\") pod \"40e31a77-1481-4eb8-a192-604aad9eaaf8\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " Mar 13 11:13:56.258441 master-0 kubenswrapper[33013]: I0313 11:13:56.258368 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-config-data\") pod \"40e31a77-1481-4eb8-a192-604aad9eaaf8\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " Mar 13 11:13:56.258529 master-0 kubenswrapper[33013]: I0313 11:13:56.258472 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e31a77-1481-4eb8-a192-604aad9eaaf8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "40e31a77-1481-4eb8-a192-604aad9eaaf8" (UID: "40e31a77-1481-4eb8-a192-604aad9eaaf8"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:13:56.258690 master-0 kubenswrapper[33013]: I0313 11:13:56.258499 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-db-sync-config-data\") pod \"40e31a77-1481-4eb8-a192-604aad9eaaf8\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " Mar 13 11:13:56.258808 master-0 kubenswrapper[33013]: I0313 11:13:56.258778 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-scripts\") pod \"40e31a77-1481-4eb8-a192-604aad9eaaf8\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " Mar 13 11:13:56.258918 master-0 kubenswrapper[33013]: I0313 11:13:56.258893 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-combined-ca-bundle\") pod \"40e31a77-1481-4eb8-a192-604aad9eaaf8\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " Mar 13 11:13:56.258998 master-0 kubenswrapper[33013]: I0313 11:13:56.258972 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9mr8\" (UniqueName: \"kubernetes.io/projected/40e31a77-1481-4eb8-a192-604aad9eaaf8-kube-api-access-q9mr8\") pod \"40e31a77-1481-4eb8-a192-604aad9eaaf8\" (UID: \"40e31a77-1481-4eb8-a192-604aad9eaaf8\") " Mar 13 11:13:56.260746 master-0 kubenswrapper[33013]: I0313 11:13:56.260718 33013 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40e31a77-1481-4eb8-a192-604aad9eaaf8-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:56.263535 master-0 kubenswrapper[33013]: I0313 11:13:56.263289 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-scripts" (OuterVolumeSpecName: "scripts") pod "40e31a77-1481-4eb8-a192-604aad9eaaf8" (UID: "40e31a77-1481-4eb8-a192-604aad9eaaf8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:56.263535 master-0 kubenswrapper[33013]: I0313 11:13:56.263479 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e31a77-1481-4eb8-a192-604aad9eaaf8-kube-api-access-q9mr8" (OuterVolumeSpecName: "kube-api-access-q9mr8") pod "40e31a77-1481-4eb8-a192-604aad9eaaf8" (UID: "40e31a77-1481-4eb8-a192-604aad9eaaf8"). InnerVolumeSpecName "kube-api-access-q9mr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:56.277535 master-0 kubenswrapper[33013]: I0313 11:13:56.277450 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "40e31a77-1481-4eb8-a192-604aad9eaaf8" (UID: "40e31a77-1481-4eb8-a192-604aad9eaaf8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:56.292728 master-0 kubenswrapper[33013]: I0313 11:13:56.292561 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40e31a77-1481-4eb8-a192-604aad9eaaf8" (UID: "40e31a77-1481-4eb8-a192-604aad9eaaf8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:56.325756 master-0 kubenswrapper[33013]: I0313 11:13:56.325675 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-config-data" (OuterVolumeSpecName: "config-data") pod "40e31a77-1481-4eb8-a192-604aad9eaaf8" (UID: "40e31a77-1481-4eb8-a192-604aad9eaaf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:13:56.375196 master-0 kubenswrapper[33013]: I0313 11:13:56.363281 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9mr8\" (UniqueName: \"kubernetes.io/projected/40e31a77-1481-4eb8-a192-604aad9eaaf8-kube-api-access-q9mr8\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:56.375196 master-0 kubenswrapper[33013]: I0313 11:13:56.363344 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:56.375196 master-0 kubenswrapper[33013]: I0313 11:13:56.363358 33013 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:56.375196 master-0 kubenswrapper[33013]: I0313 11:13:56.363369 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:56.375196 master-0 kubenswrapper[33013]: I0313 11:13:56.363378 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e31a77-1481-4eb8-a192-604aad9eaaf8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:56.788030 master-0 kubenswrapper[33013]: I0313 
11:13:56.787084 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-db-sync-trrwb" Mar 13 11:13:56.788030 master-0 kubenswrapper[33013]: I0313 11:13:56.787064 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-db-sync-trrwb" event={"ID":"40e31a77-1481-4eb8-a192-604aad9eaaf8","Type":"ContainerDied","Data":"eaaf2bc7594cc40f31eadd66e2f3a2c42e747f957a27fb27ffab0649c9d698d9"} Mar 13 11:13:56.788030 master-0 kubenswrapper[33013]: I0313 11:13:56.787171 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaaf2bc7594cc40f31eadd66e2f3a2c42e747f957a27fb27ffab0649c9d698d9" Mar 13 11:13:57.264580 master-0 kubenswrapper[33013]: I0313 11:13:57.263391 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceac4-scheduler-0"] Mar 13 11:13:57.274042 master-0 kubenswrapper[33013]: E0313 11:13:57.273546 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e31a77-1481-4eb8-a192-604aad9eaaf8" containerName="cinder-ceac4-db-sync" Mar 13 11:13:57.274042 master-0 kubenswrapper[33013]: I0313 11:13:57.273725 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e31a77-1481-4eb8-a192-604aad9eaaf8" containerName="cinder-ceac4-db-sync" Mar 13 11:13:57.275079 master-0 kubenswrapper[33013]: I0313 11:13:57.275054 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e31a77-1481-4eb8-a192-604aad9eaaf8" containerName="cinder-ceac4-db-sync" Mar 13 11:13:57.282296 master-0 kubenswrapper[33013]: I0313 11:13:57.281394 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:13:57.294014 master-0 kubenswrapper[33013]: I0313 11:13:57.290379 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-scheduler-0"] Mar 13 11:13:57.327683 master-0 kubenswrapper[33013]: I0313 11:13:57.323306 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-scripts" Mar 13 11:13:57.327683 master-0 kubenswrapper[33013]: I0313 11:13:57.323733 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-config-data" Mar 13 11:13:57.327683 master-0 kubenswrapper[33013]: I0313 11:13:57.323922 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-scheduler-config-data" Mar 13 11:13:57.486773 master-0 kubenswrapper[33013]: I0313 11:13:57.486701 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/616c0183-dbe2-4f18-8391-0d1772b7f375-etc-machine-id\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:13:57.487080 master-0 kubenswrapper[33013]: I0313 11:13:57.486902 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-scripts\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:13:57.487080 master-0 kubenswrapper[33013]: I0313 11:13:57.486957 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-combined-ca-bundle\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " 
pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:13:57.487080 master-0 kubenswrapper[33013]: I0313 11:13:57.486986 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:13:57.487243 master-0 kubenswrapper[33013]: I0313 11:13:57.487066 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data-custom\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:13:57.487243 master-0 kubenswrapper[33013]: I0313 11:13:57.487136 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx6kg\" (UniqueName: \"kubernetes.io/projected/616c0183-dbe2-4f18-8391-0d1772b7f375-kube-api-access-rx6kg\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:13:57.494537 master-0 kubenswrapper[33013]: I0313 11:13:57.493780 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-zzxl9"] Mar 13 11:13:57.494537 master-0 kubenswrapper[33013]: I0313 11:13:57.494177 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" podUID="3d7d1181-e58a-41be-af8d-c209ff199f13" containerName="dnsmasq-dns" containerID="cri-o://2c9c7846275649177a123dec6a23389fc78b5cd20e22e947b8f23760da34c563" gracePeriod=10 Mar 13 11:13:57.495820 master-0 kubenswrapper[33013]: I0313 11:13:57.495771 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:57.515945 master-0 kubenswrapper[33013]: I0313 11:13:57.515862 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceac4-volume-lvm-iscsi-0"] Mar 13 11:13:57.527867 master-0 kubenswrapper[33013]: I0313 11:13:57.527787 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.533180 master-0 kubenswrapper[33013]: I0313 11:13:57.533129 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-volume-lvm-iscsi-config-data" Mar 13 11:13:57.556200 master-0 kubenswrapper[33013]: I0313 11:13:57.546461 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-volume-lvm-iscsi-0"] Mar 13 11:13:57.577978 master-0 kubenswrapper[33013]: I0313 11:13:57.575195 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-z6jkk"] Mar 13 11:13:57.577978 master-0 kubenswrapper[33013]: I0313 11:13:57.577232 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.594519 master-0 kubenswrapper[33013]: I0313 11:13:57.592407 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/616c0183-dbe2-4f18-8391-0d1772b7f375-etc-machine-id\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.594519 master-0 kubenswrapper[33013]: I0313 11:13:57.592608 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-scripts\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.594519 master-0 kubenswrapper[33013]: I0313 11:13:57.592641 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-combined-ca-bundle\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.594519 master-0 kubenswrapper[33013]: I0313 11:13:57.592670 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.594519 master-0 kubenswrapper[33013]: I0313 11:13:57.592728 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data-custom\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.594519 master-0 kubenswrapper[33013]: I0313 11:13:57.592762 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx6kg\" (UniqueName: \"kubernetes.io/projected/616c0183-dbe2-4f18-8391-0d1772b7f375-kube-api-access-rx6kg\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.594519 master-0 kubenswrapper[33013]: I0313 11:13:57.593255 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/616c0183-dbe2-4f18-8391-0d1772b7f375-etc-machine-id\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.598947 master-0 kubenswrapper[33013]: I0313 11:13:57.598902 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-z6jkk"]
Mar 13 11:13:57.604630 master-0 kubenswrapper[33013]: I0313 11:13:57.602447 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-combined-ca-bundle\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.606892 master-0 kubenswrapper[33013]: I0313 11:13:57.606807 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data-custom\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.607445 master-0 kubenswrapper[33013]: I0313 11:13:57.607409 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-scripts\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.611524 master-0 kubenswrapper[33013]: I0313 11:13:57.611456 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.621437 master-0 kubenswrapper[33013]: I0313 11:13:57.621368 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx6kg\" (UniqueName: \"kubernetes.io/projected/616c0183-dbe2-4f18-8391-0d1772b7f375-kube-api-access-rx6kg\") pod \"cinder-ceac4-scheduler-0\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.623999 master-0 kubenswrapper[33013]: I0313 11:13:57.623918 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceac4-backup-0"]
Mar 13 11:13:57.627703 master-0 kubenswrapper[33013]: I0313 11:13:57.626279 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.632698 master-0 kubenswrapper[33013]: I0313 11:13:57.632097 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-backup-config-data"
Mar 13 11:13:57.688761 master-0 kubenswrapper[33013]: I0313 11:13:57.688693 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-scheduler-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694318 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-lib-modules\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694376 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-lib-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694399 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-svc\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694415 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-scripts\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694435 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694457 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-sys\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694688 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-machine-id\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694711 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6xp2\" (UniqueName: \"kubernetes.io/projected/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-kube-api-access-c6xp2\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694733 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-dev\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694752 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-config\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694771 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694790 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-nvme\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694810 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-combined-ca-bundle\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694847 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-nvme\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694866 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-iscsi\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.694838 master-0 kubenswrapper[33013]: I0313 11:13:57.694889 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.694913 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-machine-id\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.694938 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-lib-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.694958 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data-custom\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.694977 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88p4r\" (UniqueName: \"kubernetes.io/projected/deda895d-6ed0-4306-85ad-3ea788b9d709-kube-api-access-88p4r\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695006 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-lib-modules\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695027 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-swift-storage-0\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695046 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-sb\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695067 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-sys\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695090 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-dev\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695110 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-iscsi\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695142 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-brick\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695169 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-combined-ca-bundle\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695191 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-nb\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695223 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695253 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-run\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695270 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-scripts\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695288 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp89h\" (UniqueName: \"kubernetes.io/projected/b4d9d572-4830-4c6f-aa61-e836816ec94b-kube-api-access-bp89h\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695301 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-run\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695321 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-brick\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.700209 master-0 kubenswrapper[33013]: I0313 11:13:57.695343 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data-custom\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.737540 master-0 kubenswrapper[33013]: I0313 11:13:57.735894 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-backup-0"]
Mar 13 11:13:57.774906 master-0 kubenswrapper[33013]: I0313 11:13:57.766694 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceac4-api-0"]
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.793291 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-api-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.798878 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.798935 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-machine-id\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.798970 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-lib-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.798990 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data-custom\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799020 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88p4r\" (UniqueName: \"kubernetes.io/projected/deda895d-6ed0-4306-85ad-3ea788b9d709-kube-api-access-88p4r\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799037 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-lib-modules\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799057 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-swift-storage-0\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799074 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-sb\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799097 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-sys\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799119 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-dev\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799137 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-iscsi\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799165 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-brick\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799185 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-combined-ca-bundle\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799203 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-nb\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799236 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799269 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-run\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799284 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-scripts\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799302 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp89h\" (UniqueName: \"kubernetes.io/projected/b4d9d572-4830-4c6f-aa61-e836816ec94b-kube-api-access-bp89h\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799316 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-run\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799338 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-brick\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799360 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data-custom\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799392 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-lib-modules\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799851 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-api-config-data"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.799938 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-lib-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.800423 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-iscsi\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.800447 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-lib-modules\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.800598 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-svc\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.800624 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-scripts\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.800640 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-lib-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.801631 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-brick\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.805379 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-api-0"]
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.814383 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data-custom\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.814759 master-0 kubenswrapper[33013]: I0313 11:13:57.814408 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-lib-modules\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.836700 master-0 kubenswrapper[33013]: I0313 11:13:57.814935 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-brick\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.836700 master-0 kubenswrapper[33013]: I0313 11:13:57.819033 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-run\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.836700 master-0 kubenswrapper[33013]: I0313 11:13:57.819842 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-run\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.836700 master-0 kubenswrapper[33013]: I0313 11:13:57.819881 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-sys\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.836700 master-0 kubenswrapper[33013]: I0313 11:13:57.824993 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-swift-storage-0\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.836700 master-0 kubenswrapper[33013]: I0313 11:13:57.825067 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-dev\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.836700 master-0 kubenswrapper[33013]: I0313 11:13:57.826302 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-nb\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.836700 master-0 kubenswrapper[33013]: I0313 11:13:57.836102 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-machine-id\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.837259 master-0 kubenswrapper[33013]: I0313 11:13:57.836800 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-svc\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.837291 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-lib-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.837418 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.837758 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.837798 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-sys\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.837864 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-sys\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.837911 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-machine-id\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.837941 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6xp2\" (UniqueName: \"kubernetes.io/projected/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-kube-api-access-c6xp2\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.837968 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-dev\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.838009 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-config\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.838033 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-cinder\") pod
\"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.838065 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-nvme\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.838098 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-combined-ca-bundle\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.838549 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.839148 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-nvme\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.839294 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-config\") pod 
\"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.840446 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-dev\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.840509 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-nvme\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.840994 master-0 kubenswrapper[33013]: I0313 11:13:57.840560 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-iscsi\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.847119 master-0 kubenswrapper[33013]: I0313 11:13:57.841196 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-nvme\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.847119 master-0 kubenswrapper[33013]: I0313 11:13:57.841255 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-machine-id\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " 
pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.847119 master-0 kubenswrapper[33013]: I0313 11:13:57.841297 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-iscsi\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.847119 master-0 kubenswrapper[33013]: I0313 11:13:57.844863 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-scripts\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.847119 master-0 kubenswrapper[33013]: I0313 11:13:57.845335 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.853631 master-0 kubenswrapper[33013]: I0313 11:13:57.852533 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-combined-ca-bundle\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.862324 master-0 kubenswrapper[33013]: I0313 11:13:57.862178 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp89h\" (UniqueName: \"kubernetes.io/projected/b4d9d572-4830-4c6f-aa61-e836816ec94b-kube-api-access-bp89h\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 
11:13:57.866646 master-0 kubenswrapper[33013]: I0313 11:13:57.864306 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88p4r\" (UniqueName: \"kubernetes.io/projected/deda895d-6ed0-4306-85ad-3ea788b9d709-kube-api-access-88p4r\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" Mar 13 11:13:57.874242 master-0 kubenswrapper[33013]: I0313 11:13:57.872094 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-sb\") pod \"dnsmasq-dns-f9957b47c-z6jkk\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" Mar 13 11:13:57.878667 master-0 kubenswrapper[33013]: I0313 11:13:57.878351 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6xp2\" (UniqueName: \"kubernetes.io/projected/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-kube-api-access-c6xp2\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.879450 master-0 kubenswrapper[33013]: I0313 11:13:57.879302 33013 generic.go:334] "Generic (PLEG): container finished" podID="3d7d1181-e58a-41be-af8d-c209ff199f13" containerID="2c9c7846275649177a123dec6a23389fc78b5cd20e22e947b8f23760da34c563" exitCode=0 Mar 13 11:13:57.879640 master-0 kubenswrapper[33013]: I0313 11:13:57.879516 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-scripts\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.879640 master-0 kubenswrapper[33013]: I0313 11:13:57.879561 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" event={"ID":"3d7d1181-e58a-41be-af8d-c209ff199f13","Type":"ContainerDied","Data":"2c9c7846275649177a123dec6a23389fc78b5cd20e22e947b8f23760da34c563"} Mar 13 11:13:57.886132 master-0 kubenswrapper[33013]: I0313 11:13:57.885705 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-combined-ca-bundle\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.912683 master-0 kubenswrapper[33013]: I0313 11:13:57.912621 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:13:57.916039 master-0 kubenswrapper[33013]: I0313 11:13:57.915460 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data-custom\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.918643 master-0 kubenswrapper[33013]: I0313 11:13:57.918569 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data\") pod \"cinder-ceac4-backup-0\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:57.943797 master-0 kubenswrapper[33013]: I0313 11:13:57.943738 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38e42a3b-c76c-4851-9664-a4c351756810-etc-machine-id\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:57.944061 master-0 kubenswrapper[33013]: 
I0313 11:13:57.943826 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-scripts\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:57.944061 master-0 kubenswrapper[33013]: I0313 11:13:57.943896 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e42a3b-c76c-4851-9664-a4c351756810-logs\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:57.944061 master-0 kubenswrapper[33013]: I0313 11:13:57.943920 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m72dw\" (UniqueName: \"kubernetes.io/projected/38e42a3b-c76c-4851-9664-a4c351756810-kube-api-access-m72dw\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:57.944061 master-0 kubenswrapper[33013]: I0313 11:13:57.943967 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:57.944061 master-0 kubenswrapper[33013]: I0313 11:13:57.944020 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-combined-ca-bundle\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:57.944234 master-0 kubenswrapper[33013]: I0313 
11:13:57.944072 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data-custom\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:57.976948 master-0 kubenswrapper[33013]: I0313 11:13:57.976878 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" Mar 13 11:13:58.022773 master-0 kubenswrapper[33013]: I0313 11:13:58.018166 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-backup-0" Mar 13 11:13:58.060718 master-0 kubenswrapper[33013]: I0313 11:13:58.060017 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38e42a3b-c76c-4851-9664-a4c351756810-etc-machine-id\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.060718 master-0 kubenswrapper[33013]: I0313 11:13:58.060160 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-scripts\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.060718 master-0 kubenswrapper[33013]: I0313 11:13:58.060275 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e42a3b-c76c-4851-9664-a4c351756810-logs\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.060718 master-0 kubenswrapper[33013]: I0313 11:13:58.060324 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-m72dw\" (UniqueName: \"kubernetes.io/projected/38e42a3b-c76c-4851-9664-a4c351756810-kube-api-access-m72dw\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.060718 master-0 kubenswrapper[33013]: I0313 11:13:58.060437 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.060718 master-0 kubenswrapper[33013]: I0313 11:13:58.060560 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-combined-ca-bundle\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.063271 master-0 kubenswrapper[33013]: I0313 11:13:58.060968 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data-custom\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.066031 master-0 kubenswrapper[33013]: I0313 11:13:58.065349 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38e42a3b-c76c-4851-9664-a4c351756810-etc-machine-id\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.068707 master-0 kubenswrapper[33013]: I0313 11:13:58.068660 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/38e42a3b-c76c-4851-9664-a4c351756810-logs\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.080673 master-0 kubenswrapper[33013]: I0313 11:13:58.080577 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-scripts\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.091234 master-0 kubenswrapper[33013]: I0313 11:13:58.085323 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-combined-ca-bundle\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.091234 master-0 kubenswrapper[33013]: I0313 11:13:58.089761 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.117711 master-0 kubenswrapper[33013]: I0313 11:13:58.114841 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m72dw\" (UniqueName: \"kubernetes.io/projected/38e42a3b-c76c-4851-9664-a4c351756810-kube-api-access-m72dw\") pod \"cinder-ceac4-api-0\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.159776 master-0 kubenswrapper[33013]: I0313 11:13:58.152545 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data-custom\") pod \"cinder-ceac4-api-0\" (UID: 
\"38e42a3b-c76c-4851-9664-a4c351756810\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.289178 master-0 kubenswrapper[33013]: I0313 11:13:58.284842 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:58.380809 master-0 kubenswrapper[33013]: I0313 11:13:58.376373 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-api-0" Mar 13 11:13:58.390552 master-0 kubenswrapper[33013]: I0313 11:13:58.389452 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-swift-storage-0\") pod \"3d7d1181-e58a-41be-af8d-c209ff199f13\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " Mar 13 11:13:58.390552 master-0 kubenswrapper[33013]: I0313 11:13:58.389630 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n6l4\" (UniqueName: \"kubernetes.io/projected/3d7d1181-e58a-41be-af8d-c209ff199f13-kube-api-access-5n6l4\") pod \"3d7d1181-e58a-41be-af8d-c209ff199f13\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " Mar 13 11:13:58.390552 master-0 kubenswrapper[33013]: I0313 11:13:58.389697 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-config\") pod \"3d7d1181-e58a-41be-af8d-c209ff199f13\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " Mar 13 11:13:58.390552 master-0 kubenswrapper[33013]: I0313 11:13:58.389872 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-svc\") pod \"3d7d1181-e58a-41be-af8d-c209ff199f13\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " Mar 13 11:13:58.390552 master-0 kubenswrapper[33013]: I0313 
11:13:58.389930 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-nb\") pod \"3d7d1181-e58a-41be-af8d-c209ff199f13\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " Mar 13 11:13:58.390552 master-0 kubenswrapper[33013]: I0313 11:13:58.390042 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-sb\") pod \"3d7d1181-e58a-41be-af8d-c209ff199f13\" (UID: \"3d7d1181-e58a-41be-af8d-c209ff199f13\") " Mar 13 11:13:58.424189 master-0 kubenswrapper[33013]: I0313 11:13:58.424124 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7d1181-e58a-41be-af8d-c209ff199f13-kube-api-access-5n6l4" (OuterVolumeSpecName: "kube-api-access-5n6l4") pod "3d7d1181-e58a-41be-af8d-c209ff199f13" (UID: "3d7d1181-e58a-41be-af8d-c209ff199f13"). InnerVolumeSpecName "kube-api-access-5n6l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:13:58.445616 master-0 kubenswrapper[33013]: I0313 11:13:58.428164 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-scheduler-0"] Mar 13 11:13:58.509873 master-0 kubenswrapper[33013]: I0313 11:13:58.499560 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n6l4\" (UniqueName: \"kubernetes.io/projected/3d7d1181-e58a-41be-af8d-c209ff199f13-kube-api-access-5n6l4\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:58.621648 master-0 kubenswrapper[33013]: I0313 11:13:58.617711 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d7d1181-e58a-41be-af8d-c209ff199f13" (UID: "3d7d1181-e58a-41be-af8d-c209ff199f13"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:58.662636 master-0 kubenswrapper[33013]: I0313 11:13:58.661324 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3d7d1181-e58a-41be-af8d-c209ff199f13" (UID: "3d7d1181-e58a-41be-af8d-c209ff199f13"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:58.669478 master-0 kubenswrapper[33013]: I0313 11:13:58.665643 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-config" (OuterVolumeSpecName: "config") pod "3d7d1181-e58a-41be-af8d-c209ff199f13" (UID: "3d7d1181-e58a-41be-af8d-c209ff199f13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:58.669478 master-0 kubenswrapper[33013]: I0313 11:13:58.666145 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d7d1181-e58a-41be-af8d-c209ff199f13" (UID: "3d7d1181-e58a-41be-af8d-c209ff199f13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:58.669478 master-0 kubenswrapper[33013]: I0313 11:13:58.668720 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d7d1181-e58a-41be-af8d-c209ff199f13" (UID: "3d7d1181-e58a-41be-af8d-c209ff199f13"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:13:58.736662 master-0 kubenswrapper[33013]: I0313 11:13:58.726107 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:58.736662 master-0 kubenswrapper[33013]: I0313 11:13:58.726146 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:58.736662 master-0 kubenswrapper[33013]: I0313 11:13:58.726158 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:58.736662 master-0 kubenswrapper[33013]: I0313 11:13:58.726170 33013 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:58.736662 master-0 kubenswrapper[33013]: I0313 11:13:58.726179 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7d1181-e58a-41be-af8d-c209ff199f13-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:13:58.935197 master-0 kubenswrapper[33013]: I0313 11:13:58.935101 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-volume-lvm-iscsi-0"] Mar 13 11:13:59.033470 master-0 kubenswrapper[33013]: I0313 11:13:59.033417 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" Mar 13 11:13:59.034436 master-0 kubenswrapper[33013]: I0313 11:13:59.034395 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf64fcfbc-zzxl9" event={"ID":"3d7d1181-e58a-41be-af8d-c209ff199f13","Type":"ContainerDied","Data":"749591acab914c8606ff2c09684fc446f49940d251b6a3d77ed560fec455bf81"} Mar 13 11:13:59.034524 master-0 kubenswrapper[33013]: I0313 11:13:59.034443 33013 scope.go:117] "RemoveContainer" containerID="2c9c7846275649177a123dec6a23389fc78b5cd20e22e947b8f23760da34c563" Mar 13 11:13:59.048883 master-0 kubenswrapper[33013]: I0313 11:13:59.046670 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-scheduler-0" event={"ID":"616c0183-dbe2-4f18-8391-0d1772b7f375","Type":"ContainerStarted","Data":"d367b194786e47261bdd0eddf10193cac82b4d89a67ec44e99657fad21dcdecd"} Mar 13 11:13:59.048883 master-0 kubenswrapper[33013]: I0313 11:13:59.048377 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" event={"ID":"b4d9d572-4830-4c6f-aa61-e836816ec94b","Type":"ContainerStarted","Data":"bc8d0a9fff51eb4044acdcdfcb84483081392991400e48cd89d7b38c7525514c"} Mar 13 11:13:59.108616 master-0 kubenswrapper[33013]: I0313 11:13:59.107725 33013 scope.go:117] "RemoveContainer" containerID="de73f389128b0feb349e451e751dd83b09fae78d8db8dff37b78a347aa965355" Mar 13 11:13:59.120940 master-0 kubenswrapper[33013]: W0313 11:13:59.120859 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeda895d_6ed0_4306_85ad_3ea788b9d709.slice/crio-7083b1612fd4492dff64deba2462199cff9c0ae9d5543d10e8a79747b9a6c58e WatchSource:0}: Error finding container 7083b1612fd4492dff64deba2462199cff9c0ae9d5543d10e8a79747b9a6c58e: Status 404 returned error can't find the container with id 7083b1612fd4492dff64deba2462199cff9c0ae9d5543d10e8a79747b9a6c58e Mar 
13 11:13:59.133576 master-0 kubenswrapper[33013]: I0313 11:13:59.133521 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-z6jkk"] Mar 13 11:13:59.248183 master-0 kubenswrapper[33013]: I0313 11:13:59.248115 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-zzxl9"] Mar 13 11:13:59.323746 master-0 kubenswrapper[33013]: I0313 11:13:59.322800 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cf64fcfbc-zzxl9"] Mar 13 11:13:59.391629 master-0 kubenswrapper[33013]: I0313 11:13:59.388683 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-backup-0"] Mar 13 11:13:59.526173 master-0 kubenswrapper[33013]: I0313 11:13:59.525865 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-api-0"] Mar 13 11:14:00.066916 master-0 kubenswrapper[33013]: I0313 11:14:00.066851 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-api-0" event={"ID":"38e42a3b-c76c-4851-9664-a4c351756810","Type":"ContainerStarted","Data":"ca8dbf66f2aebf5547f79cf6d840edc2c13b058ad9d743b2e2873b6d35a5f6fa"} Mar 13 11:14:00.069948 master-0 kubenswrapper[33013]: I0313 11:14:00.069882 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-backup-0" event={"ID":"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8","Type":"ContainerStarted","Data":"b43285cc6e42a3616d43badbd9e621a1e7db93d21c4001ad35764f85c9a54596"} Mar 13 11:14:00.078900 master-0 kubenswrapper[33013]: I0313 11:14:00.078854 33013 generic.go:334] "Generic (PLEG): container finished" podID="deda895d-6ed0-4306-85ad-3ea788b9d709" containerID="55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d" exitCode=0 Mar 13 11:14:00.078993 master-0 kubenswrapper[33013]: I0313 11:14:00.078906 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" 
event={"ID":"deda895d-6ed0-4306-85ad-3ea788b9d709","Type":"ContainerDied","Data":"55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d"} Mar 13 11:14:00.078993 master-0 kubenswrapper[33013]: I0313 11:14:00.078933 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" event={"ID":"deda895d-6ed0-4306-85ad-3ea788b9d709","Type":"ContainerStarted","Data":"7083b1612fd4492dff64deba2462199cff9c0ae9d5543d10e8a79747b9a6c58e"} Mar 13 11:14:00.435695 master-0 kubenswrapper[33013]: I0313 11:14:00.435624 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceac4-api-0"] Mar 13 11:14:00.742815 master-0 kubenswrapper[33013]: I0313 11:14:00.742739 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d7d1181-e58a-41be-af8d-c209ff199f13" path="/var/lib/kubelet/pods/3d7d1181-e58a-41be-af8d-c209ff199f13/volumes" Mar 13 11:14:01.114289 master-0 kubenswrapper[33013]: I0313 11:14:01.113913 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-backup-0" event={"ID":"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8","Type":"ContainerStarted","Data":"1dce2530df1b96681748551a56a0d2149e701f021d5e492b94854bf68b6cb561"} Mar 13 11:14:01.128976 master-0 kubenswrapper[33013]: I0313 11:14:01.128852 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" event={"ID":"b4d9d572-4830-4c6f-aa61-e836816ec94b","Type":"ContainerStarted","Data":"8e5661b2a39e4da32a8d1b9522f5dc22eb5bdd6b17f475d0ac62686eedb5dcd8"} Mar 13 11:14:01.128976 master-0 kubenswrapper[33013]: I0313 11:14:01.128922 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" event={"ID":"b4d9d572-4830-4c6f-aa61-e836816ec94b","Type":"ContainerStarted","Data":"92136a8ac6376648da2845880d840c334bd57ffb3374dec57d9521620552252f"} Mar 13 11:14:01.142506 master-0 kubenswrapper[33013]: I0313 11:14:01.142429 33013 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" event={"ID":"deda895d-6ed0-4306-85ad-3ea788b9d709","Type":"ContainerStarted","Data":"8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6"} Mar 13 11:14:01.144767 master-0 kubenswrapper[33013]: I0313 11:14:01.144721 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" Mar 13 11:14:01.174773 master-0 kubenswrapper[33013]: I0313 11:14:01.172920 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-scheduler-0" event={"ID":"616c0183-dbe2-4f18-8391-0d1772b7f375","Type":"ContainerStarted","Data":"db9540b84c4bcd5308f09f6b463baf2bfbd12f01b6005000f33352eec609eaab"} Mar 13 11:14:01.190339 master-0 kubenswrapper[33013]: I0313 11:14:01.190253 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" podStartSLOduration=3.174522283 podStartE2EDuration="4.190229397s" podCreationTimestamp="2026-03-13 11:13:57 +0000 UTC" firstStartedPulling="2026-03-13 11:13:58.956275835 +0000 UTC m=+1022.432229184" lastFinishedPulling="2026-03-13 11:13:59.971982949 +0000 UTC m=+1023.447936298" observedRunningTime="2026-03-13 11:14:01.15922259 +0000 UTC m=+1024.635175949" watchObservedRunningTime="2026-03-13 11:14:01.190229397 +0000 UTC m=+1024.666182746" Mar 13 11:14:01.197613 master-0 kubenswrapper[33013]: I0313 11:14:01.197540 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-api-0" event={"ID":"38e42a3b-c76c-4851-9664-a4c351756810","Type":"ContainerStarted","Data":"9ec8a97cc52d7f21ac72b8dc746b2e0de311feec67514ee43f02603c62f7d9e1"} Mar 13 11:14:01.230906 master-0 kubenswrapper[33013]: I0313 11:14:01.228254 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" podStartSLOduration=4.228222434 podStartE2EDuration="4.228222434s" podCreationTimestamp="2026-03-13 
11:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:01.20605424 +0000 UTC m=+1024.682007589" watchObservedRunningTime="2026-03-13 11:14:01.228222434 +0000 UTC m=+1024.704175793" Mar 13 11:14:02.261721 master-0 kubenswrapper[33013]: I0313 11:14:02.260986 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-scheduler-0" event={"ID":"616c0183-dbe2-4f18-8391-0d1772b7f375","Type":"ContainerStarted","Data":"0abc6675ab4a8fb48fac7693d560dabe62037ba9ff706a87f79e0e6be3a17f9c"} Mar 13 11:14:02.277047 master-0 kubenswrapper[33013]: I0313 11:14:02.275442 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-api-0" event={"ID":"38e42a3b-c76c-4851-9664-a4c351756810","Type":"ContainerStarted","Data":"3692b0584989563d0422d037f87c5d4b9d67a1374feee6226d3662560cc4d392"} Mar 13 11:14:02.277047 master-0 kubenswrapper[33013]: I0313 11:14:02.275666 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ceac4-api-0" podUID="38e42a3b-c76c-4851-9664-a4c351756810" containerName="cinder-ceac4-api-log" containerID="cri-o://9ec8a97cc52d7f21ac72b8dc746b2e0de311feec67514ee43f02603c62f7d9e1" gracePeriod=30 Mar 13 11:14:02.277047 master-0 kubenswrapper[33013]: I0313 11:14:02.276041 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:02.277047 master-0 kubenswrapper[33013]: I0313 11:14:02.276082 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ceac4-api-0" podUID="38e42a3b-c76c-4851-9664-a4c351756810" containerName="cinder-api" containerID="cri-o://3692b0584989563d0422d037f87c5d4b9d67a1374feee6226d3662560cc4d392" gracePeriod=30 Mar 13 11:14:02.327742 master-0 kubenswrapper[33013]: I0313 11:14:02.327681 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-ceac4-backup-0" event={"ID":"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8","Type":"ContainerStarted","Data":"f4afc239bfc8e18d4e3ef40f2181f5ebe74b92387a5f498b902a27284eba8bf3"} Mar 13 11:14:02.381633 master-0 kubenswrapper[33013]: I0313 11:14:02.379250 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ceac4-scheduler-0" podStartSLOduration=4.367557111 podStartE2EDuration="5.379223409s" podCreationTimestamp="2026-03-13 11:13:57 +0000 UTC" firstStartedPulling="2026-03-13 11:13:58.534243479 +0000 UTC m=+1022.010196828" lastFinishedPulling="2026-03-13 11:13:59.545909777 +0000 UTC m=+1023.021863126" observedRunningTime="2026-03-13 11:14:02.316208586 +0000 UTC m=+1025.792161935" watchObservedRunningTime="2026-03-13 11:14:02.379223409 +0000 UTC m=+1025.855176768" Mar 13 11:14:02.381633 master-0 kubenswrapper[33013]: I0313 11:14:02.381217 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ceac4-api-0" podStartSLOduration=5.3811992459999995 podStartE2EDuration="5.381199246s" podCreationTimestamp="2026-03-13 11:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:02.344153186 +0000 UTC m=+1025.820106545" watchObservedRunningTime="2026-03-13 11:14:02.381199246 +0000 UTC m=+1025.857152595" Mar 13 11:14:02.465790 master-0 kubenswrapper[33013]: I0313 11:14:02.465673 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ceac4-backup-0" podStartSLOduration=4.455350484 podStartE2EDuration="5.465640082s" podCreationTimestamp="2026-03-13 11:13:57 +0000 UTC" firstStartedPulling="2026-03-13 11:13:59.483616725 +0000 UTC m=+1022.959570074" lastFinishedPulling="2026-03-13 11:14:00.493906323 +0000 UTC m=+1023.969859672" observedRunningTime="2026-03-13 11:14:02.382557445 +0000 UTC m=+1025.858510794" watchObservedRunningTime="2026-03-13 
11:14:02.465640082 +0000 UTC m=+1025.941593441" Mar 13 11:14:02.690018 master-0 kubenswrapper[33013]: I0313 11:14:02.689935 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:02.914107 master-0 kubenswrapper[33013]: I0313 11:14:02.913461 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:03.024432 master-0 kubenswrapper[33013]: I0313 11:14:03.019467 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:03.349629 master-0 kubenswrapper[33013]: I0313 11:14:03.345557 33013 generic.go:334] "Generic (PLEG): container finished" podID="38e42a3b-c76c-4851-9664-a4c351756810" containerID="3692b0584989563d0422d037f87c5d4b9d67a1374feee6226d3662560cc4d392" exitCode=0 Mar 13 11:14:03.349629 master-0 kubenswrapper[33013]: I0313 11:14:03.345805 33013 generic.go:334] "Generic (PLEG): container finished" podID="38e42a3b-c76c-4851-9664-a4c351756810" containerID="9ec8a97cc52d7f21ac72b8dc746b2e0de311feec67514ee43f02603c62f7d9e1" exitCode=143 Mar 13 11:14:03.349629 master-0 kubenswrapper[33013]: I0313 11:14:03.346841 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-api-0" event={"ID":"38e42a3b-c76c-4851-9664-a4c351756810","Type":"ContainerDied","Data":"3692b0584989563d0422d037f87c5d4b9d67a1374feee6226d3662560cc4d392"} Mar 13 11:14:03.349629 master-0 kubenswrapper[33013]: I0313 11:14:03.346954 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-api-0" event={"ID":"38e42a3b-c76c-4851-9664-a4c351756810","Type":"ContainerDied","Data":"9ec8a97cc52d7f21ac72b8dc746b2e0de311feec67514ee43f02603c62f7d9e1"} Mar 13 11:14:03.349629 master-0 kubenswrapper[33013]: I0313 11:14:03.346976 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-api-0" 
event={"ID":"38e42a3b-c76c-4851-9664-a4c351756810","Type":"ContainerDied","Data":"ca8dbf66f2aebf5547f79cf6d840edc2c13b058ad9d743b2e2873b6d35a5f6fa"} Mar 13 11:14:03.349629 master-0 kubenswrapper[33013]: I0313 11:14:03.346987 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca8dbf66f2aebf5547f79cf6d840edc2c13b058ad9d743b2e2873b6d35a5f6fa" Mar 13 11:14:03.398101 master-0 kubenswrapper[33013]: I0313 11:14:03.398054 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:03.561349 master-0 kubenswrapper[33013]: I0313 11:14:03.561267 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data\") pod \"38e42a3b-c76c-4851-9664-a4c351756810\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " Mar 13 11:14:03.561349 master-0 kubenswrapper[33013]: I0313 11:14:03.561332 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-scripts\") pod \"38e42a3b-c76c-4851-9664-a4c351756810\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " Mar 13 11:14:03.561721 master-0 kubenswrapper[33013]: I0313 11:14:03.561411 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e42a3b-c76c-4851-9664-a4c351756810-logs\") pod \"38e42a3b-c76c-4851-9664-a4c351756810\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " Mar 13 11:14:03.561721 master-0 kubenswrapper[33013]: I0313 11:14:03.561459 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m72dw\" (UniqueName: \"kubernetes.io/projected/38e42a3b-c76c-4851-9664-a4c351756810-kube-api-access-m72dw\") pod \"38e42a3b-c76c-4851-9664-a4c351756810\" (UID: 
\"38e42a3b-c76c-4851-9664-a4c351756810\") " Mar 13 11:14:03.561721 master-0 kubenswrapper[33013]: I0313 11:14:03.561621 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-combined-ca-bundle\") pod \"38e42a3b-c76c-4851-9664-a4c351756810\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " Mar 13 11:14:03.561721 master-0 kubenswrapper[33013]: I0313 11:14:03.561673 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38e42a3b-c76c-4851-9664-a4c351756810-etc-machine-id\") pod \"38e42a3b-c76c-4851-9664-a4c351756810\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " Mar 13 11:14:03.561959 master-0 kubenswrapper[33013]: I0313 11:14:03.561907 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data-custom\") pod \"38e42a3b-c76c-4851-9664-a4c351756810\" (UID: \"38e42a3b-c76c-4851-9664-a4c351756810\") " Mar 13 11:14:03.564056 master-0 kubenswrapper[33013]: I0313 11:14:03.564014 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e42a3b-c76c-4851-9664-a4c351756810-logs" (OuterVolumeSpecName: "logs") pod "38e42a3b-c76c-4851-9664-a4c351756810" (UID: "38e42a3b-c76c-4851-9664-a4c351756810"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:14:03.564785 master-0 kubenswrapper[33013]: I0313 11:14:03.564721 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38e42a3b-c76c-4851-9664-a4c351756810-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "38e42a3b-c76c-4851-9664-a4c351756810" (UID: "38e42a3b-c76c-4851-9664-a4c351756810"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:03.568652 master-0 kubenswrapper[33013]: I0313 11:14:03.568495 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38e42a3b-c76c-4851-9664-a4c351756810" (UID: "38e42a3b-c76c-4851-9664-a4c351756810"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:03.585042 master-0 kubenswrapper[33013]: I0313 11:14:03.584956 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-scripts" (OuterVolumeSpecName: "scripts") pod "38e42a3b-c76c-4851-9664-a4c351756810" (UID: "38e42a3b-c76c-4851-9664-a4c351756810"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:03.603087 master-0 kubenswrapper[33013]: I0313 11:14:03.602841 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e42a3b-c76c-4851-9664-a4c351756810-kube-api-access-m72dw" (OuterVolumeSpecName: "kube-api-access-m72dw") pod "38e42a3b-c76c-4851-9664-a4c351756810" (UID: "38e42a3b-c76c-4851-9664-a4c351756810"). InnerVolumeSpecName "kube-api-access-m72dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:03.619933 master-0 kubenswrapper[33013]: I0313 11:14:03.619273 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38e42a3b-c76c-4851-9664-a4c351756810" (UID: "38e42a3b-c76c-4851-9664-a4c351756810"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:03.638886 master-0 kubenswrapper[33013]: I0313 11:14:03.638813 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data" (OuterVolumeSpecName: "config-data") pod "38e42a3b-c76c-4851-9664-a4c351756810" (UID: "38e42a3b-c76c-4851-9664-a4c351756810"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:03.679469 master-0 kubenswrapper[33013]: I0313 11:14:03.679335 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:03.679469 master-0 kubenswrapper[33013]: I0313 11:14:03.679384 33013 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38e42a3b-c76c-4851-9664-a4c351756810-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:03.679469 master-0 kubenswrapper[33013]: I0313 11:14:03.679395 33013 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:03.679469 master-0 kubenswrapper[33013]: I0313 11:14:03.679407 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:03.679469 master-0 kubenswrapper[33013]: I0313 11:14:03.679415 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e42a3b-c76c-4851-9664-a4c351756810-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:03.679469 master-0 kubenswrapper[33013]: I0313 11:14:03.679423 33013 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e42a3b-c76c-4851-9664-a4c351756810-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:03.679469 master-0 kubenswrapper[33013]: I0313 11:14:03.679433 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m72dw\" (UniqueName: \"kubernetes.io/projected/38e42a3b-c76c-4851-9664-a4c351756810-kube-api-access-m72dw\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:04.360161 master-0 kubenswrapper[33013]: I0313 11:14:04.360110 33013 generic.go:334] "Generic (PLEG): container finished" podID="1bf1cd3c-9327-4a27-aaee-20da3d6111f1" containerID="c113f3406a1c342fd1f464e66bec3ef02c626a4b37e1ce92433b2cf7cf2ef162" exitCode=0 Mar 13 11:14:04.361119 master-0 kubenswrapper[33013]: I0313 11:14:04.360271 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.361119 master-0 kubenswrapper[33013]: I0313 11:14:04.360408 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s6b9s" event={"ID":"1bf1cd3c-9327-4a27-aaee-20da3d6111f1","Type":"ContainerDied","Data":"c113f3406a1c342fd1f464e66bec3ef02c626a4b37e1ce92433b2cf7cf2ef162"} Mar 13 11:14:04.474771 master-0 kubenswrapper[33013]: I0313 11:14:04.474580 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceac4-api-0"] Mar 13 11:14:04.509659 master-0 kubenswrapper[33013]: I0313 11:14:04.507806 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ceac4-api-0"] Mar 13 11:14:04.528734 master-0 kubenswrapper[33013]: I0313 11:14:04.528680 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceac4-api-0"] Mar 13 11:14:04.529515 master-0 kubenswrapper[33013]: E0313 11:14:04.529464 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7d1181-e58a-41be-af8d-c209ff199f13" containerName="init" Mar 13 11:14:04.529614 
master-0 kubenswrapper[33013]: I0313 11:14:04.529602 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7d1181-e58a-41be-af8d-c209ff199f13" containerName="init" Mar 13 11:14:04.529752 master-0 kubenswrapper[33013]: E0313 11:14:04.529740 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e42a3b-c76c-4851-9664-a4c351756810" containerName="cinder-api" Mar 13 11:14:04.529822 master-0 kubenswrapper[33013]: I0313 11:14:04.529812 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e42a3b-c76c-4851-9664-a4c351756810" containerName="cinder-api" Mar 13 11:14:04.530002 master-0 kubenswrapper[33013]: E0313 11:14:04.529990 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7d1181-e58a-41be-af8d-c209ff199f13" containerName="dnsmasq-dns" Mar 13 11:14:04.530077 master-0 kubenswrapper[33013]: I0313 11:14:04.530066 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7d1181-e58a-41be-af8d-c209ff199f13" containerName="dnsmasq-dns" Mar 13 11:14:04.530157 master-0 kubenswrapper[33013]: E0313 11:14:04.530146 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e42a3b-c76c-4851-9664-a4c351756810" containerName="cinder-ceac4-api-log" Mar 13 11:14:04.530220 master-0 kubenswrapper[33013]: I0313 11:14:04.530209 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e42a3b-c76c-4851-9664-a4c351756810" containerName="cinder-ceac4-api-log" Mar 13 11:14:04.530610 master-0 kubenswrapper[33013]: I0313 11:14:04.530580 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e42a3b-c76c-4851-9664-a4c351756810" containerName="cinder-api" Mar 13 11:14:04.530694 master-0 kubenswrapper[33013]: I0313 11:14:04.530683 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e42a3b-c76c-4851-9664-a4c351756810" containerName="cinder-ceac4-api-log" Mar 13 11:14:04.530764 master-0 kubenswrapper[33013]: I0313 11:14:04.530754 33013 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3d7d1181-e58a-41be-af8d-c209ff199f13" containerName="dnsmasq-dns" Mar 13 11:14:04.532112 master-0 kubenswrapper[33013]: I0313 11:14:04.532043 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.535146 master-0 kubenswrapper[33013]: I0313 11:14:04.535125 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 13 11:14:04.535837 master-0 kubenswrapper[33013]: I0313 11:14:04.535822 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 13 11:14:04.536016 master-0 kubenswrapper[33013]: I0313 11:14:04.535846 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-api-config-data" Mar 13 11:14:04.566120 master-0 kubenswrapper[33013]: I0313 11:14:04.563564 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-api-0"] Mar 13 11:14:04.719443 master-0 kubenswrapper[33013]: I0313 11:14:04.717946 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d97eda7-c4c6-42f3-bb49-12824d113ea9-logs\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.719443 master-0 kubenswrapper[33013]: I0313 11:14:04.718159 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-public-tls-certs\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.719443 master-0 kubenswrapper[33013]: I0313 11:14:04.718243 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6d97eda7-c4c6-42f3-bb49-12824d113ea9-etc-machine-id\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.719443 master-0 kubenswrapper[33013]: I0313 11:14:04.718463 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-combined-ca-bundle\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.719443 master-0 kubenswrapper[33013]: I0313 11:14:04.718647 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-config-data-custom\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.719443 master-0 kubenswrapper[33013]: I0313 11:14:04.718722 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-scripts\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.719443 master-0 kubenswrapper[33013]: I0313 11:14:04.718855 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-internal-tls-certs\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.719443 master-0 kubenswrapper[33013]: I0313 11:14:04.718918 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sbqvv\" (UniqueName: \"kubernetes.io/projected/6d97eda7-c4c6-42f3-bb49-12824d113ea9-kube-api-access-sbqvv\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.719443 master-0 kubenswrapper[33013]: I0313 11:14:04.719071 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-config-data\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.731054 master-0 kubenswrapper[33013]: I0313 11:14:04.730955 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e42a3b-c76c-4851-9664-a4c351756810" path="/var/lib/kubelet/pods/38e42a3b-c76c-4851-9664-a4c351756810/volumes" Mar 13 11:14:04.825461 master-0 kubenswrapper[33013]: I0313 11:14:04.825409 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-public-tls-certs\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.825782 master-0 kubenswrapper[33013]: I0313 11:14:04.825764 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d97eda7-c4c6-42f3-bb49-12824d113ea9-etc-machine-id\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.826025 master-0 kubenswrapper[33013]: I0313 11:14:04.825944 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d97eda7-c4c6-42f3-bb49-12824d113ea9-etc-machine-id\") pod \"cinder-ceac4-api-0\" (UID: 
\"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.826307 master-0 kubenswrapper[33013]: I0313 11:14:04.826289 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-combined-ca-bundle\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.827283 master-0 kubenswrapper[33013]: I0313 11:14:04.827262 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-config-data-custom\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.829879 master-0 kubenswrapper[33013]: I0313 11:14:04.829857 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-scripts\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.830080 master-0 kubenswrapper[33013]: I0313 11:14:04.830064 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-internal-tls-certs\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:04.830198 master-0 kubenswrapper[33013]: I0313 11:14:04.830185 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbqvv\" (UniqueName: \"kubernetes.io/projected/6d97eda7-c4c6-42f3-bb49-12824d113ea9-kube-api-access-sbqvv\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " 
pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.830335 master-0 kubenswrapper[33013]: I0313 11:14:04.830322 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-config-data\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.830528 master-0 kubenswrapper[33013]: I0313 11:14:04.830515 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d97eda7-c4c6-42f3-bb49-12824d113ea9-logs\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.836394 master-0 kubenswrapper[33013]: I0313 11:14:04.834432 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d97eda7-c4c6-42f3-bb49-12824d113ea9-logs\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.852759 master-0 kubenswrapper[33013]: I0313 11:14:04.851828 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbqvv\" (UniqueName: \"kubernetes.io/projected/6d97eda7-c4c6-42f3-bb49-12824d113ea9-kube-api-access-sbqvv\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.854558 master-0 kubenswrapper[33013]: I0313 11:14:04.854526 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-combined-ca-bundle\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.854706 master-0 kubenswrapper[33013]: I0313 11:14:04.854563 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-public-tls-certs\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.854846 master-0 kubenswrapper[33013]: I0313 11:14:04.854803 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-config-data\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.855944 master-0 kubenswrapper[33013]: I0313 11:14:04.855379 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-config-data-custom\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.864253 master-0 kubenswrapper[33013]: I0313 11:14:04.864209 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-internal-tls-certs\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:04.865876 master-0 kubenswrapper[33013]: I0313 11:14:04.865822 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d97eda7-c4c6-42f3-bb49-12824d113ea9-scripts\") pod \"cinder-ceac4-api-0\" (UID: \"6d97eda7-c4c6-42f3-bb49-12824d113ea9\") " pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:05.159182 master-0 kubenswrapper[33013]: I0313 11:14:05.159109 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-api-0"
Mar 13 11:14:05.725496 master-0 kubenswrapper[33013]: I0313 11:14:05.722434 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-api-0"]
Mar 13 11:14:06.040253 master-0 kubenswrapper[33013]: I0313 11:14:06.040120 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:14:06.172239 master-0 kubenswrapper[33013]: I0313 11:14:06.171713 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k2mb\" (UniqueName: \"kubernetes.io/projected/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-kube-api-access-7k2mb\") pod \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") "
Mar 13 11:14:06.172239 master-0 kubenswrapper[33013]: I0313 11:14:06.171840 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-etc-podinfo\") pod \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") "
Mar 13 11:14:06.172239 master-0 kubenswrapper[33013]: I0313 11:14:06.171926 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data\") pod \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") "
Mar 13 11:14:06.172239 master-0 kubenswrapper[33013]: I0313 11:14:06.172124 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-combined-ca-bundle\") pod \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") "
Mar 13 11:14:06.172239 master-0 kubenswrapper[33013]: I0313 11:14:06.172237 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data-merged\") pod \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") "
Mar 13 11:14:06.172239 master-0 kubenswrapper[33013]: I0313 11:14:06.172258 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-scripts\") pod \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\" (UID: \"1bf1cd3c-9327-4a27-aaee-20da3d6111f1\") "
Mar 13 11:14:06.178641 master-0 kubenswrapper[33013]: I0313 11:14:06.178477 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-scripts" (OuterVolumeSpecName: "scripts") pod "1bf1cd3c-9327-4a27-aaee-20da3d6111f1" (UID: "1bf1cd3c-9327-4a27-aaee-20da3d6111f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:14:06.179058 master-0 kubenswrapper[33013]: I0313 11:14:06.179013 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "1bf1cd3c-9327-4a27-aaee-20da3d6111f1" (UID: "1bf1cd3c-9327-4a27-aaee-20da3d6111f1"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:14:06.180697 master-0 kubenswrapper[33013]: I0313 11:14:06.180662 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "1bf1cd3c-9327-4a27-aaee-20da3d6111f1" (UID: "1bf1cd3c-9327-4a27-aaee-20da3d6111f1"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 13 11:14:06.183699 master-0 kubenswrapper[33013]: I0313 11:14:06.183644 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-kube-api-access-7k2mb" (OuterVolumeSpecName: "kube-api-access-7k2mb") pod "1bf1cd3c-9327-4a27-aaee-20da3d6111f1" (UID: "1bf1cd3c-9327-4a27-aaee-20da3d6111f1"). InnerVolumeSpecName "kube-api-access-7k2mb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:14:06.210168 master-0 kubenswrapper[33013]: I0313 11:14:06.210010 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data" (OuterVolumeSpecName: "config-data") pod "1bf1cd3c-9327-4a27-aaee-20da3d6111f1" (UID: "1bf1cd3c-9327-4a27-aaee-20da3d6111f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:14:06.228175 master-0 kubenswrapper[33013]: I0313 11:14:06.228119 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bf1cd3c-9327-4a27-aaee-20da3d6111f1" (UID: "1bf1cd3c-9327-4a27-aaee-20da3d6111f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:14:06.278127 master-0 kubenswrapper[33013]: I0313 11:14:06.277942 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:06.278127 master-0 kubenswrapper[33013]: I0313 11:14:06.277979 33013 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data-merged\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:06.278127 master-0 kubenswrapper[33013]: I0313 11:14:06.277990 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:06.278127 master-0 kubenswrapper[33013]: I0313 11:14:06.278000 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k2mb\" (UniqueName: \"kubernetes.io/projected/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-kube-api-access-7k2mb\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:06.278127 master-0 kubenswrapper[33013]: I0313 11:14:06.278008 33013 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:06.278127 master-0 kubenswrapper[33013]: I0313 11:14:06.278017 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf1cd3c-9327-4a27-aaee-20da3d6111f1-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:06.415514 master-0 kubenswrapper[33013]: I0313 11:14:06.407827 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s6b9s" event={"ID":"1bf1cd3c-9327-4a27-aaee-20da3d6111f1","Type":"ContainerDied","Data":"d7758f1b6f512832ca59a4a46af29729a606768e35e39602418ecd319d1b214b"}
Mar 13 11:14:06.415514 master-0 kubenswrapper[33013]: I0313 11:14:06.407883 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7758f1b6f512832ca59a4a46af29729a606768e35e39602418ecd319d1b214b"
Mar 13 11:14:06.415514 master-0 kubenswrapper[33013]: I0313 11:14:06.410650 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-s6b9s"
Mar 13 11:14:06.423129 master-0 kubenswrapper[33013]: I0313 11:14:06.423030 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-api-0" event={"ID":"6d97eda7-c4c6-42f3-bb49-12824d113ea9","Type":"ContainerStarted","Data":"0ee017d0e608401e49080bdbf64c36d62b151b90a3c688ccef28462e7219131c"}
Mar 13 11:14:07.015569 master-0 kubenswrapper[33013]: I0313 11:14:06.998909 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-glgrm"]
Mar 13 11:14:07.015569 master-0 kubenswrapper[33013]: E0313 11:14:07.000266 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf1cd3c-9327-4a27-aaee-20da3d6111f1" containerName="ironic-db-sync"
Mar 13 11:14:07.015569 master-0 kubenswrapper[33013]: I0313 11:14:07.000294 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf1cd3c-9327-4a27-aaee-20da3d6111f1" containerName="ironic-db-sync"
Mar 13 11:14:07.015569 master-0 kubenswrapper[33013]: E0313 11:14:07.000357 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf1cd3c-9327-4a27-aaee-20da3d6111f1" containerName="init"
Mar 13 11:14:07.015569 master-0 kubenswrapper[33013]: I0313 11:14:07.000366 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf1cd3c-9327-4a27-aaee-20da3d6111f1" containerName="init"
Mar 13 11:14:07.015569 master-0 kubenswrapper[33013]: I0313 11:14:07.000999 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf1cd3c-9327-4a27-aaee-20da3d6111f1" containerName="ironic-db-sync"
Mar 13 11:14:07.023578 master-0 kubenswrapper[33013]: I0313 11:14:07.023408 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-glgrm"
Mar 13 11:14:07.076898 master-0 kubenswrapper[33013]: I0313 11:14:07.072900 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-227ss\" (UniqueName: \"kubernetes.io/projected/92cce121-c716-4f17-8c76-edd30dec3d3b-kube-api-access-227ss\") pod \"ironic-inspector-db-create-glgrm\" (UID: \"92cce121-c716-4f17-8c76-edd30dec3d3b\") " pod="openstack/ironic-inspector-db-create-glgrm"
Mar 13 11:14:07.076898 master-0 kubenswrapper[33013]: I0313 11:14:07.073902 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cce121-c716-4f17-8c76-edd30dec3d3b-operator-scripts\") pod \"ironic-inspector-db-create-glgrm\" (UID: \"92cce121-c716-4f17-8c76-edd30dec3d3b\") " pod="openstack/ironic-inspector-db-create-glgrm"
Mar 13 11:14:07.084342 master-0 kubenswrapper[33013]: I0313 11:14:07.084292 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-glgrm"]
Mar 13 11:14:07.155615 master-0 kubenswrapper[33013]: I0313 11:14:07.152080 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-7f9d77888-kwqwh"]
Mar 13 11:14:07.176613 master-0 kubenswrapper[33013]: I0313 11:14:07.166798 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.184298 master-0 kubenswrapper[33013]: I0313 11:14:07.181373 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8zz\" (UniqueName: \"kubernetes.io/projected/3b99f02d-f8e2-497b-b68b-8e445e7b7541-kube-api-access-np8zz\") pod \"ironic-neutron-agent-7f9d77888-kwqwh\" (UID: \"3b99f02d-f8e2-497b-b68b-8e445e7b7541\") " pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.184298 master-0 kubenswrapper[33013]: I0313 11:14:07.181502 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b99f02d-f8e2-497b-b68b-8e445e7b7541-config\") pod \"ironic-neutron-agent-7f9d77888-kwqwh\" (UID: \"3b99f02d-f8e2-497b-b68b-8e445e7b7541\") " pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.184298 master-0 kubenswrapper[33013]: I0313 11:14:07.181533 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cce121-c716-4f17-8c76-edd30dec3d3b-operator-scripts\") pod \"ironic-inspector-db-create-glgrm\" (UID: \"92cce121-c716-4f17-8c76-edd30dec3d3b\") " pod="openstack/ironic-inspector-db-create-glgrm"
Mar 13 11:14:07.184298 master-0 kubenswrapper[33013]: I0313 11:14:07.181635 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b99f02d-f8e2-497b-b68b-8e445e7b7541-combined-ca-bundle\") pod \"ironic-neutron-agent-7f9d77888-kwqwh\" (UID: \"3b99f02d-f8e2-497b-b68b-8e445e7b7541\") " pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.184298 master-0 kubenswrapper[33013]: I0313 11:14:07.181734 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-227ss\" (UniqueName: \"kubernetes.io/projected/92cce121-c716-4f17-8c76-edd30dec3d3b-kube-api-access-227ss\") pod \"ironic-inspector-db-create-glgrm\" (UID: \"92cce121-c716-4f17-8c76-edd30dec3d3b\") " pod="openstack/ironic-inspector-db-create-glgrm"
Mar 13 11:14:07.189314 master-0 kubenswrapper[33013]: I0313 11:14:07.188324 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cce121-c716-4f17-8c76-edd30dec3d3b-operator-scripts\") pod \"ironic-inspector-db-create-glgrm\" (UID: \"92cce121-c716-4f17-8c76-edd30dec3d3b\") " pod="openstack/ironic-inspector-db-create-glgrm"
Mar 13 11:14:07.202087 master-0 kubenswrapper[33013]: I0313 11:14:07.202017 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data"
Mar 13 11:14:07.243030 master-0 kubenswrapper[33013]: I0313 11:14:07.242982 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7f9d77888-kwqwh"]
Mar 13 11:14:07.283414 master-0 kubenswrapper[33013]: I0313 11:14:07.283276 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8zz\" (UniqueName: \"kubernetes.io/projected/3b99f02d-f8e2-497b-b68b-8e445e7b7541-kube-api-access-np8zz\") pod \"ironic-neutron-agent-7f9d77888-kwqwh\" (UID: \"3b99f02d-f8e2-497b-b68b-8e445e7b7541\") " pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.283867 master-0 kubenswrapper[33013]: I0313 11:14:07.283847 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b99f02d-f8e2-497b-b68b-8e445e7b7541-config\") pod \"ironic-neutron-agent-7f9d77888-kwqwh\" (UID: \"3b99f02d-f8e2-497b-b68b-8e445e7b7541\") " pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.284090 master-0 kubenswrapper[33013]: I0313 11:14:07.284071 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b99f02d-f8e2-497b-b68b-8e445e7b7541-combined-ca-bundle\") pod \"ironic-neutron-agent-7f9d77888-kwqwh\" (UID: \"3b99f02d-f8e2-497b-b68b-8e445e7b7541\") " pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.332509 master-0 kubenswrapper[33013]: I0313 11:14:07.330644 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-227ss\" (UniqueName: \"kubernetes.io/projected/92cce121-c716-4f17-8c76-edd30dec3d3b-kube-api-access-227ss\") pod \"ironic-inspector-db-create-glgrm\" (UID: \"92cce121-c716-4f17-8c76-edd30dec3d3b\") " pod="openstack/ironic-inspector-db-create-glgrm"
Mar 13 11:14:07.358393 master-0 kubenswrapper[33013]: I0313 11:14:07.358345 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b99f02d-f8e2-497b-b68b-8e445e7b7541-config\") pod \"ironic-neutron-agent-7f9d77888-kwqwh\" (UID: \"3b99f02d-f8e2-497b-b68b-8e445e7b7541\") " pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.361564 master-0 kubenswrapper[33013]: I0313 11:14:07.361512 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8zz\" (UniqueName: \"kubernetes.io/projected/3b99f02d-f8e2-497b-b68b-8e445e7b7541-kube-api-access-np8zz\") pod \"ironic-neutron-agent-7f9d77888-kwqwh\" (UID: \"3b99f02d-f8e2-497b-b68b-8e445e7b7541\") " pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.364801 master-0 kubenswrapper[33013]: I0313 11:14:07.364757 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b99f02d-f8e2-497b-b68b-8e445e7b7541-combined-ca-bundle\") pod \"ironic-neutron-agent-7f9d77888-kwqwh\" (UID: \"3b99f02d-f8e2-497b-b68b-8e445e7b7541\") " pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.380404 master-0 kubenswrapper[33013]: I0313 11:14:07.378122 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-z6jkk"]
Mar 13 11:14:07.380404 master-0 kubenswrapper[33013]: I0313 11:14:07.378420 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" podUID="deda895d-6ed0-4306-85ad-3ea788b9d709" containerName="dnsmasq-dns" containerID="cri-o://8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6" gracePeriod=10
Mar 13 11:14:07.380891 master-0 kubenswrapper[33013]: I0313 11:14:07.380665 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk"
Mar 13 11:14:07.454286 master-0 kubenswrapper[33013]: I0313 11:14:07.448519 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-4eb2-account-create-update-kgbrq"]
Mar 13 11:14:07.454286 master-0 kubenswrapper[33013]: I0313 11:14:07.450336 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq"
Mar 13 11:14:07.460694 master-0 kubenswrapper[33013]: I0313 11:14:07.458825 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret"
Mar 13 11:14:07.526401 master-0 kubenswrapper[33013]: I0313 11:14:07.526177 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-api-0" event={"ID":"6d97eda7-c4c6-42f3-bb49-12824d113ea9","Type":"ContainerStarted","Data":"6f754ca8e016677f77f6ed329760f6b5469fd883c9c2799bc1bf16f17eb575dd"}
Mar 13 11:14:07.534961 master-0 kubenswrapper[33013]: I0313 11:14:07.533223 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b494447c-sl7nf"]
Mar 13 11:14:07.536047 master-0 kubenswrapper[33013]: I0313 11:14:07.535995 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.558616 master-0 kubenswrapper[33013]: I0313 11:14:07.552314 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-4eb2-account-create-update-kgbrq"]
Mar 13 11:14:07.558616 master-0 kubenswrapper[33013]: I0313 11:14:07.552454 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-glgrm"
Mar 13 11:14:07.611796 master-0 kubenswrapper[33013]: I0313 11:14:07.608934 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57hp9\" (UniqueName: \"kubernetes.io/projected/b3c23adf-bb65-4b0e-a687-c314205c0be8-kube-api-access-57hp9\") pod \"ironic-inspector-4eb2-account-create-update-kgbrq\" (UID: \"b3c23adf-bb65-4b0e-a687-c314205c0be8\") " pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq"
Mar 13 11:14:07.611796 master-0 kubenswrapper[33013]: I0313 11:14:07.610814 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3c23adf-bb65-4b0e-a687-c314205c0be8-operator-scripts\") pod \"ironic-inspector-4eb2-account-create-update-kgbrq\" (UID: \"b3c23adf-bb65-4b0e-a687-c314205c0be8\") " pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq"
Mar 13 11:14:07.620150 master-0 kubenswrapper[33013]: I0313 11:14:07.619565 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:07.641867 master-0 kubenswrapper[33013]: I0313 11:14:07.641791 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-sl7nf"]
Mar 13 11:14:07.656134 master-0 kubenswrapper[33013]: I0313 11:14:07.653653 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-7657b6885c-5c572"]
Mar 13 11:14:07.657415 master-0 kubenswrapper[33013]: I0313 11:14:07.657376 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.660945 master-0 kubenswrapper[33013]: I0313 11:14:07.660497 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts"
Mar 13 11:14:07.660945 master-0 kubenswrapper[33013]: I0313 11:14:07.660615 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Mar 13 11:14:07.660945 master-0 kubenswrapper[33013]: I0313 11:14:07.660496 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport"
Mar 13 11:14:07.660945 master-0 kubenswrapper[33013]: I0313 11:14:07.660745 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data"
Mar 13 11:14:07.660945 master-0 kubenswrapper[33013]: I0313 11:14:07.660836 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 13 11:14:07.701703 master-0 kubenswrapper[33013]: E0313 11:14:07.701647 33013 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeda895d_6ed0_4306_85ad_3ea788b9d709.slice/crio-8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6.scope\": RecentStats: unable to find data in memory cache]"
Mar 13 11:14:07.720153 master-0 kubenswrapper[33013]: I0313 11:14:07.714862 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-nb\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.720153 master-0 kubenswrapper[33013]: I0313 11:14:07.714941 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3c23adf-bb65-4b0e-a687-c314205c0be8-operator-scripts\") pod \"ironic-inspector-4eb2-account-create-update-kgbrq\" (UID: \"b3c23adf-bb65-4b0e-a687-c314205c0be8\") " pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq"
Mar 13 11:14:07.720153 master-0 kubenswrapper[33013]: I0313 11:14:07.714973 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-config\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.720153 master-0 kubenswrapper[33013]: I0313 11:14:07.715184 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-sb\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.720153 master-0 kubenswrapper[33013]: I0313 11:14:07.715480 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-svc\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.720153 master-0 kubenswrapper[33013]: I0313 11:14:07.715638 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57hp9\" (UniqueName: \"kubernetes.io/projected/b3c23adf-bb65-4b0e-a687-c314205c0be8-kube-api-access-57hp9\") pod \"ironic-inspector-4eb2-account-create-update-kgbrq\" (UID: \"b3c23adf-bb65-4b0e-a687-c314205c0be8\") " pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq"
Mar 13 11:14:07.720153 master-0 kubenswrapper[33013]: I0313 11:14:07.715749 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpc86\" (UniqueName: \"kubernetes.io/projected/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-kube-api-access-dpc86\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.720153 master-0 kubenswrapper[33013]: I0313 11:14:07.715753 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3c23adf-bb65-4b0e-a687-c314205c0be8-operator-scripts\") pod \"ironic-inspector-4eb2-account-create-update-kgbrq\" (UID: \"b3c23adf-bb65-4b0e-a687-c314205c0be8\") " pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq"
Mar 13 11:14:07.720153 master-0 kubenswrapper[33013]: I0313 11:14:07.715860 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-swift-storage-0\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.725736 master-0 kubenswrapper[33013]: I0313 11:14:07.725672 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7657b6885c-5c572"]
Mar 13 11:14:07.752995 master-0 kubenswrapper[33013]: I0313 11:14:07.752890 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57hp9\" (UniqueName: \"kubernetes.io/projected/b3c23adf-bb65-4b0e-a687-c314205c0be8-kube-api-access-57hp9\") pod \"ironic-inspector-4eb2-account-create-update-kgbrq\" (UID: \"b3c23adf-bb65-4b0e-a687-c314205c0be8\") " pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq"
Mar 13 11:14:07.820358 master-0 kubenswrapper[33013]: I0313 11:14:07.820295 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-nb\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.820871 master-0 kubenswrapper[33013]: I0313 11:14:07.820853 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-config\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.821023 master-0 kubenswrapper[33013]: I0313 11:14:07.821009 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0341ff60-4819-448f-98f7-4ee8216d5d39-etc-podinfo\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.821180 master-0 kubenswrapper[33013]: I0313 11:14:07.821166 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-scripts\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.821399 master-0 kubenswrapper[33013]: I0313 11:14:07.821295 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-combined-ca-bundle\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.821671 master-0 kubenswrapper[33013]: I0313 11:14:07.821651 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-sb\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.821797 master-0 kubenswrapper[33013]: I0313 11:14:07.821772 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.822611 master-0 kubenswrapper[33013]: I0313 11:14:07.822570 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-merged\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.823182 master-0 kubenswrapper[33013]: I0313 11:14:07.822689 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cbsx\" (UniqueName: \"kubernetes.io/projected/0341ff60-4819-448f-98f7-4ee8216d5d39-kube-api-access-9cbsx\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.823182 master-0 kubenswrapper[33013]: I0313 11:14:07.822761 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-custom\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.823182 master-0 kubenswrapper[33013]: I0313 11:14:07.822816 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-logs\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.823830 master-0 kubenswrapper[33013]: I0313 11:14:07.823448 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-svc\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.824231 master-0 kubenswrapper[33013]: I0313 11:14:07.824168 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpc86\" (UniqueName: \"kubernetes.io/projected/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-kube-api-access-dpc86\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.824231 master-0 kubenswrapper[33013]: I0313 11:14:07.824206 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-swift-storage-0\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.825390 master-0 kubenswrapper[33013]: I0313 11:14:07.825370 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-swift-storage-0\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.826171 master-0 kubenswrapper[33013]: I0313 11:14:07.826152 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-nb\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.827985 master-0 kubenswrapper[33013]: I0313 11:14:07.827964 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-sb\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.830428 master-0 kubenswrapper[33013]: I0313 11:14:07.830380 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-config\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.835937 master-0 kubenswrapper[33013]: I0313 11:14:07.834970 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-svc\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.870564 master-0 kubenswrapper[33013]: I0313 11:14:07.870416 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpc86\" (UniqueName: \"kubernetes.io/projected/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-kube-api-access-dpc86\") pod \"dnsmasq-dns-67b494447c-sl7nf\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " pod="openstack/dnsmasq-dns-67b494447c-sl7nf"
Mar 13 11:14:07.927868 master-0 kubenswrapper[33013]: I0313 11:14:07.927780 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.927868 master-0 kubenswrapper[33013]: I0313 11:14:07.927858 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-merged\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.928084 master-0 kubenswrapper[33013]: I0313 11:14:07.927886 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cbsx\" (UniqueName: \"kubernetes.io/projected/0341ff60-4819-448f-98f7-4ee8216d5d39-kube-api-access-9cbsx\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572"
Mar 13 11:14:07.928084 master-0 kubenswrapper[33013]: I0313 11:14:07.927941 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-custom\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.928084 master-0 kubenswrapper[33013]: I0313 11:14:07.927985 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-logs\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.928330 master-0 kubenswrapper[33013]: I0313 11:14:07.928230 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0341ff60-4819-448f-98f7-4ee8216d5d39-etc-podinfo\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.928330 master-0 kubenswrapper[33013]: I0313 11:14:07.928274 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-scripts\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.928330 master-0 kubenswrapper[33013]: I0313 11:14:07.928295 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-combined-ca-bundle\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.939853 master-0 kubenswrapper[33013]: I0313 11:14:07.939739 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-logs\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.959692 master-0 kubenswrapper[33013]: I0313 11:14:07.959550 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.965724 master-0 kubenswrapper[33013]: I0313 11:14:07.963406 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-merged\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.975040 master-0 kubenswrapper[33013]: I0313 11:14:07.966558 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0341ff60-4819-448f-98f7-4ee8216d5d39-etc-podinfo\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.975040 master-0 kubenswrapper[33013]: I0313 11:14:07.966650 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq" Mar 13 11:14:07.975040 master-0 kubenswrapper[33013]: I0313 11:14:07.971571 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-custom\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.975040 master-0 kubenswrapper[33013]: I0313 11:14:07.972950 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-scripts\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:07.975040 master-0 kubenswrapper[33013]: I0313 11:14:07.973899 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-combined-ca-bundle\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:08.003660 master-0 kubenswrapper[33013]: I0313 11:14:08.003505 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cbsx\" (UniqueName: \"kubernetes.io/projected/0341ff60-4819-448f-98f7-4ee8216d5d39-kube-api-access-9cbsx\") pod \"ironic-7657b6885c-5c572\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:08.079460 master-0 kubenswrapper[33013]: I0313 11:14:08.047452 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" Mar 13 11:14:08.083356 master-0 kubenswrapper[33013]: I0313 11:14:08.083295 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:08.212745 master-0 kubenswrapper[33013]: I0313 11:14:08.212697 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:08.315389 master-0 kubenswrapper[33013]: I0313 11:14:08.287696 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceac4-scheduler-0"] Mar 13 11:14:08.474233 master-0 kubenswrapper[33013]: I0313 11:14:08.471415 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" Mar 13 11:14:08.525754 master-0 kubenswrapper[33013]: I0313 11:14:08.521952 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:08.568725 master-0 kubenswrapper[33013]: I0313 11:14:08.567265 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-config\") pod \"deda895d-6ed0-4306-85ad-3ea788b9d709\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " Mar 13 11:14:08.580660 master-0 kubenswrapper[33013]: I0313 11:14:08.578440 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7f9d77888-kwqwh"] Mar 13 11:14:08.603091 master-0 kubenswrapper[33013]: I0313 11:14:08.603028 33013 generic.go:334] "Generic (PLEG): container finished" podID="deda895d-6ed0-4306-85ad-3ea788b9d709" containerID="8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6" exitCode=0 Mar 13 11:14:08.603221 master-0 kubenswrapper[33013]: I0313 11:14:08.603192 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" event={"ID":"deda895d-6ed0-4306-85ad-3ea788b9d709","Type":"ContainerDied","Data":"8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6"} Mar 13 11:14:08.603279 master-0 
kubenswrapper[33013]: I0313 11:14:08.603236 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" event={"ID":"deda895d-6ed0-4306-85ad-3ea788b9d709","Type":"ContainerDied","Data":"7083b1612fd4492dff64deba2462199cff9c0ae9d5543d10e8a79747b9a6c58e"} Mar 13 11:14:08.603329 master-0 kubenswrapper[33013]: I0313 11:14:08.603301 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" Mar 13 11:14:08.603374 master-0 kubenswrapper[33013]: I0313 11:14:08.603269 33013 scope.go:117] "RemoveContainer" containerID="8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6" Mar 13 11:14:08.618783 master-0 kubenswrapper[33013]: I0313 11:14:08.614972 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:08.640952 master-0 kubenswrapper[33013]: I0313 11:14:08.640828 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ceac4-scheduler-0" podUID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerName="cinder-scheduler" containerID="cri-o://db9540b84c4bcd5308f09f6b463baf2bfbd12f01b6005000f33352eec609eaab" gracePeriod=30 Mar 13 11:14:08.642167 master-0 kubenswrapper[33013]: I0313 11:14:08.642083 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-api-0" event={"ID":"6d97eda7-c4c6-42f3-bb49-12824d113ea9","Type":"ContainerStarted","Data":"9a557e290e5ccef4cf7b37f23e686fd591b2101e1ea951b9cc47a51291061e52"} Mar 13 11:14:08.642445 master-0 kubenswrapper[33013]: I0313 11:14:08.642234 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ceac4-scheduler-0" podUID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerName="probe" containerID="cri-o://0abc6675ab4a8fb48fac7693d560dabe62037ba9ff706a87f79e0e6be3a17f9c" gracePeriod=30 Mar 13 11:14:08.642741 master-0 kubenswrapper[33013]: 
I0313 11:14:08.642548 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:08.651802 master-0 kubenswrapper[33013]: I0313 11:14:08.651744 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-config" (OuterVolumeSpecName: "config") pod "deda895d-6ed0-4306-85ad-3ea788b9d709" (UID: "deda895d-6ed0-4306-85ad-3ea788b9d709"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:08.665271 master-0 kubenswrapper[33013]: I0313 11:14:08.665079 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceac4-backup-0"] Mar 13 11:14:08.686921 master-0 kubenswrapper[33013]: I0313 11:14:08.679103 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-nb\") pod \"deda895d-6ed0-4306-85ad-3ea788b9d709\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " Mar 13 11:14:08.686921 master-0 kubenswrapper[33013]: I0313 11:14:08.679222 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-svc\") pod \"deda895d-6ed0-4306-85ad-3ea788b9d709\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " Mar 13 11:14:08.686921 master-0 kubenswrapper[33013]: I0313 11:14:08.679253 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-sb\") pod \"deda895d-6ed0-4306-85ad-3ea788b9d709\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " Mar 13 11:14:08.686921 master-0 kubenswrapper[33013]: I0313 11:14:08.679283 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-swift-storage-0\") pod \"deda895d-6ed0-4306-85ad-3ea788b9d709\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " Mar 13 11:14:08.686921 master-0 kubenswrapper[33013]: I0313 11:14:08.679440 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88p4r\" (UniqueName: \"kubernetes.io/projected/deda895d-6ed0-4306-85ad-3ea788b9d709-kube-api-access-88p4r\") pod \"deda895d-6ed0-4306-85ad-3ea788b9d709\" (UID: \"deda895d-6ed0-4306-85ad-3ea788b9d709\") " Mar 13 11:14:08.694556 master-0 kubenswrapper[33013]: I0313 11:14:08.694435 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:08.703259 master-0 kubenswrapper[33013]: I0313 11:14:08.703185 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deda895d-6ed0-4306-85ad-3ea788b9d709-kube-api-access-88p4r" (OuterVolumeSpecName: "kube-api-access-88p4r") pod "deda895d-6ed0-4306-85ad-3ea788b9d709" (UID: "deda895d-6ed0-4306-85ad-3ea788b9d709"). InnerVolumeSpecName "kube-api-access-88p4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:08.753799 master-0 kubenswrapper[33013]: I0313 11:14:08.753721 33013 scope.go:117] "RemoveContainer" containerID="55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d" Mar 13 11:14:08.796799 master-0 kubenswrapper[33013]: I0313 11:14:08.796469 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-glgrm"] Mar 13 11:14:08.804286 master-0 kubenswrapper[33013]: I0313 11:14:08.798951 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88p4r\" (UniqueName: \"kubernetes.io/projected/deda895d-6ed0-4306-85ad-3ea788b9d709-kube-api-access-88p4r\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:08.810290 master-0 kubenswrapper[33013]: I0313 11:14:08.810245 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceac4-volume-lvm-iscsi-0"] Mar 13 11:14:08.824341 master-0 kubenswrapper[33013]: I0313 11:14:08.817447 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ceac4-api-0" podStartSLOduration=4.817421132 podStartE2EDuration="4.817421132s" podCreationTimestamp="2026-03-13 11:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:08.782717469 +0000 UTC m=+1032.258670818" watchObservedRunningTime="2026-03-13 11:14:08.817421132 +0000 UTC m=+1032.293374481" Mar 13 11:14:08.839542 master-0 kubenswrapper[33013]: I0313 11:14:08.839452 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "deda895d-6ed0-4306-85ad-3ea788b9d709" (UID: "deda895d-6ed0-4306-85ad-3ea788b9d709"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:08.852793 master-0 kubenswrapper[33013]: I0313 11:14:08.850222 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "deda895d-6ed0-4306-85ad-3ea788b9d709" (UID: "deda895d-6ed0-4306-85ad-3ea788b9d709"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:08.854805 master-0 kubenswrapper[33013]: I0313 11:14:08.854748 33013 scope.go:117] "RemoveContainer" containerID="8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6" Mar 13 11:14:08.855285 master-0 kubenswrapper[33013]: E0313 11:14:08.855254 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6\": container with ID starting with 8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6 not found: ID does not exist" containerID="8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6" Mar 13 11:14:08.855340 master-0 kubenswrapper[33013]: I0313 11:14:08.855289 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6"} err="failed to get container status \"8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6\": rpc error: code = NotFound desc = could not find container \"8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6\": container with ID starting with 8a7a92e579825dd014ed899494c8be7c4e2f5a76fad49addfb55886726f358c6 not found: ID does not exist" Mar 13 11:14:08.855340 master-0 kubenswrapper[33013]: I0313 11:14:08.855311 33013 scope.go:117] "RemoveContainer" containerID="55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d" Mar 13 11:14:08.855615 master-0 
kubenswrapper[33013]: E0313 11:14:08.855568 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d\": container with ID starting with 55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d not found: ID does not exist" containerID="55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d" Mar 13 11:14:08.855707 master-0 kubenswrapper[33013]: I0313 11:14:08.855634 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d"} err="failed to get container status \"55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d\": rpc error: code = NotFound desc = could not find container \"55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d\": container with ID starting with 55507216c8ec51ba82d1f9f159881e8cb4fe29ebca1437e1c615d859d271de6d not found: ID does not exist" Mar 13 11:14:08.863615 master-0 kubenswrapper[33013]: W0313 11:14:08.862320 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92cce121_c716_4f17_8c76_edd30dec3d3b.slice/crio-dbad163bbdecff94b53c4f1d01c1ad80eac45c0da79c439694737c1cb682bf97 WatchSource:0}: Error finding container dbad163bbdecff94b53c4f1d01c1ad80eac45c0da79c439694737c1cb682bf97: Status 404 returned error can't find the container with id dbad163bbdecff94b53c4f1d01c1ad80eac45c0da79c439694737c1cb682bf97 Mar 13 11:14:08.868959 master-0 kubenswrapper[33013]: I0313 11:14:08.868705 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "deda895d-6ed0-4306-85ad-3ea788b9d709" (UID: "deda895d-6ed0-4306-85ad-3ea788b9d709"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:08.879631 master-0 kubenswrapper[33013]: I0313 11:14:08.871791 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "deda895d-6ed0-4306-85ad-3ea788b9d709" (UID: "deda895d-6ed0-4306-85ad-3ea788b9d709"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:08.908759 master-0 kubenswrapper[33013]: I0313 11:14:08.901805 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:08.908759 master-0 kubenswrapper[33013]: I0313 11:14:08.901915 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:08.908759 master-0 kubenswrapper[33013]: I0313 11:14:08.901930 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:08.908759 master-0 kubenswrapper[33013]: I0313 11:14:08.901941 33013 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deda895d-6ed0-4306-85ad-3ea788b9d709-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:09.050363 master-0 kubenswrapper[33013]: I0313 11:14:09.046655 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f9957b47c-z6jkk"] Mar 13 11:14:09.177181 master-0 kubenswrapper[33013]: I0313 11:14:09.173227 33013 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-f9957b47c-z6jkk"] Mar 13 11:14:09.186720 master-0 kubenswrapper[33013]: I0313 11:14:09.186401 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-sl7nf"] Mar 13 11:14:09.204206 master-0 kubenswrapper[33013]: I0313 11:14:09.202198 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Mar 13 11:14:09.204206 master-0 kubenswrapper[33013]: E0313 11:14:09.203274 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deda895d-6ed0-4306-85ad-3ea788b9d709" containerName="init" Mar 13 11:14:09.204206 master-0 kubenswrapper[33013]: I0313 11:14:09.203300 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="deda895d-6ed0-4306-85ad-3ea788b9d709" containerName="init" Mar 13 11:14:09.204206 master-0 kubenswrapper[33013]: E0313 11:14:09.203314 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deda895d-6ed0-4306-85ad-3ea788b9d709" containerName="dnsmasq-dns" Mar 13 11:14:09.204206 master-0 kubenswrapper[33013]: I0313 11:14:09.203322 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="deda895d-6ed0-4306-85ad-3ea788b9d709" containerName="dnsmasq-dns" Mar 13 11:14:09.204206 master-0 kubenswrapper[33013]: I0313 11:14:09.203723 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="deda895d-6ed0-4306-85ad-3ea788b9d709" containerName="dnsmasq-dns" Mar 13 11:14:09.208019 master-0 kubenswrapper[33013]: I0313 11:14:09.207939 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Mar 13 11:14:09.212338 master-0 kubenswrapper[33013]: I0313 11:14:09.211709 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Mar 13 11:14:09.214835 master-0 kubenswrapper[33013]: I0313 11:14:09.214792 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Mar 13 11:14:09.242534 master-0 kubenswrapper[33013]: I0313 11:14:09.242462 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 13 11:14:09.315150 master-0 kubenswrapper[33013]: I0313 11:14:09.314255 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-config-data\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.315150 master-0 kubenswrapper[33013]: I0313 11:14:09.314325 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e1e7ab4-9e0c-41af-b7cd-465b872e71ad\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0874bf4a-4480-418a-ad82-97dbd5e10d31\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.315150 master-0 kubenswrapper[33013]: I0313 11:14:09.314360 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntp8t\" (UniqueName: \"kubernetes.io/projected/e16baf7d-8440-4431-a184-523ae34f6e6f-kube-api-access-ntp8t\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.315150 master-0 kubenswrapper[33013]: I0313 11:14:09.314411 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.315150 master-0 kubenswrapper[33013]: I0313 11:14:09.314439 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.315150 master-0 kubenswrapper[33013]: I0313 11:14:09.314469 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-scripts\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.315150 master-0 kubenswrapper[33013]: I0313 11:14:09.314490 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e16baf7d-8440-4431-a184-523ae34f6e6f-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.315150 master-0 kubenswrapper[33013]: I0313 11:14:09.314516 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e16baf7d-8440-4431-a184-523ae34f6e6f-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.339433 master-0 kubenswrapper[33013]: W0313 11:14:09.339372 33013 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3c23adf_bb65_4b0e_a687_c314205c0be8.slice/crio-83c0c5281421cce910620397878775d3f79baf918898deb0672efcb84351dc92 WatchSource:0}: Error finding container 83c0c5281421cce910620397878775d3f79baf918898deb0672efcb84351dc92: Status 404 returned error can't find the container with id 83c0c5281421cce910620397878775d3f79baf918898deb0672efcb84351dc92 Mar 13 11:14:09.350351 master-0 kubenswrapper[33013]: I0313 11:14:09.350302 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-4eb2-account-create-update-kgbrq"] Mar 13 11:14:09.417127 master-0 kubenswrapper[33013]: I0313 11:14:09.417049 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntp8t\" (UniqueName: \"kubernetes.io/projected/e16baf7d-8440-4431-a184-523ae34f6e6f-kube-api-access-ntp8t\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.417388 master-0 kubenswrapper[33013]: I0313 11:14:09.417160 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.417388 master-0 kubenswrapper[33013]: I0313 11:14:09.417216 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.417388 master-0 kubenswrapper[33013]: I0313 11:14:09.417266 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-scripts\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.417388 master-0 kubenswrapper[33013]: I0313 11:14:09.417297 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e16baf7d-8440-4431-a184-523ae34f6e6f-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.417388 master-0 kubenswrapper[33013]: I0313 11:14:09.417340 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e16baf7d-8440-4431-a184-523ae34f6e6f-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.417663 master-0 kubenswrapper[33013]: I0313 11:14:09.417634 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-config-data\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.417708 master-0 kubenswrapper[33013]: I0313 11:14:09.417683 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e1e7ab4-9e0c-41af-b7cd-465b872e71ad\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0874bf4a-4480-418a-ad82-97dbd5e10d31\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.418893 master-0 kubenswrapper[33013]: I0313 11:14:09.418839 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e16baf7d-8440-4431-a184-523ae34f6e6f-config-data-merged\") 
pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.435793 master-0 kubenswrapper[33013]: I0313 11:14:09.434778 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 11:14:09.435793 master-0 kubenswrapper[33013]: I0313 11:14:09.434843 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e1e7ab4-9e0c-41af-b7cd-465b872e71ad\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0874bf4a-4480-418a-ad82-97dbd5e10d31\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/4ef6849eba2952320323a959655635aa9a75d741e6ae78eb4e7d90b323e051d2/globalmount\"" pod="openstack/ironic-conductor-0" Mar 13 11:14:09.442293 master-0 kubenswrapper[33013]: I0313 11:14:09.440882 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.442902 master-0 kubenswrapper[33013]: I0313 11:14:09.442848 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-scripts\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.453275 master-0 kubenswrapper[33013]: I0313 11:14:09.453226 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e16baf7d-8440-4431-a184-523ae34f6e6f-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 
11:14:09.454564 master-0 kubenswrapper[33013]: I0313 11:14:09.454519 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-config-data\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.461814 master-0 kubenswrapper[33013]: I0313 11:14:09.461731 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e16baf7d-8440-4431-a184-523ae34f6e6f-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.488002 master-0 kubenswrapper[33013]: I0313 11:14:09.483501 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntp8t\" (UniqueName: \"kubernetes.io/projected/e16baf7d-8440-4431-a184-523ae34f6e6f-kube-api-access-ntp8t\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:09.595257 master-0 kubenswrapper[33013]: I0313 11:14:09.595207 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7657b6885c-5c572"] Mar 13 11:14:09.678220 master-0 kubenswrapper[33013]: I0313 11:14:09.674934 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" event={"ID":"3b99f02d-f8e2-497b-b68b-8e445e7b7541","Type":"ContainerStarted","Data":"9209d677ce5b9fd129c7d86bb4d24bdfee48849a778fe356a73ee060607eef33"} Mar 13 11:14:09.683382 master-0 kubenswrapper[33013]: I0313 11:14:09.683278 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-glgrm" event={"ID":"92cce121-c716-4f17-8c76-edd30dec3d3b","Type":"ContainerStarted","Data":"ad3ccc22509ab5a42f57fc2ab4c6e38abc497e8fc18f9294523be5ce55eb8c1a"} Mar 13 11:14:09.683382 
master-0 kubenswrapper[33013]: I0313 11:14:09.683360 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-glgrm" event={"ID":"92cce121-c716-4f17-8c76-edd30dec3d3b","Type":"ContainerStarted","Data":"dbad163bbdecff94b53c4f1d01c1ad80eac45c0da79c439694737c1cb682bf97"} Mar 13 11:14:09.697280 master-0 kubenswrapper[33013]: I0313 11:14:09.697109 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" event={"ID":"69fa6a94-1b94-44ad-b7b3-5294d3f76e57","Type":"ContainerStarted","Data":"d2a0ad8e58a99916b42c31acc96d2df58336aa5cac6ad275fb3de671d7e86b46"} Mar 13 11:14:09.701416 master-0 kubenswrapper[33013]: I0313 11:14:09.701328 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq" event={"ID":"b3c23adf-bb65-4b0e-a687-c314205c0be8","Type":"ContainerStarted","Data":"83c0c5281421cce910620397878775d3f79baf918898deb0672efcb84351dc92"} Mar 13 11:14:09.708860 master-0 kubenswrapper[33013]: I0313 11:14:09.708793 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7657b6885c-5c572" event={"ID":"0341ff60-4819-448f-98f7-4ee8216d5d39","Type":"ContainerStarted","Data":"31df7c8ccb67e3a199b7e432cff02938c0487cd96e8929492a7791ae6ddb110e"} Mar 13 11:14:09.713314 master-0 kubenswrapper[33013]: I0313 11:14:09.713207 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ceac4-backup-0" podUID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerName="cinder-backup" containerID="cri-o://1dce2530df1b96681748551a56a0d2149e701f021d5e492b94854bf68b6cb561" gracePeriod=30 Mar 13 11:14:09.713571 master-0 kubenswrapper[33013]: I0313 11:14:09.713235 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ceac4-backup-0" podUID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerName="probe" 
containerID="cri-o://f4afc239bfc8e18d4e3ef40f2181f5ebe74b92387a5f498b902a27284eba8bf3" gracePeriod=30 Mar 13 11:14:09.715807 master-0 kubenswrapper[33013]: I0313 11:14:09.713473 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" podUID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerName="cinder-volume" containerID="cri-o://92136a8ac6376648da2845880d840c334bd57ffb3374dec57d9521620552252f" gracePeriod=30 Mar 13 11:14:09.716057 master-0 kubenswrapper[33013]: I0313 11:14:09.713566 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" podUID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerName="probe" containerID="cri-o://8e5661b2a39e4da32a8d1b9522f5dc22eb5bdd6b17f475d0ac62686eedb5dcd8" gracePeriod=30 Mar 13 11:14:09.740754 master-0 kubenswrapper[33013]: I0313 11:14:09.740639 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-create-glgrm" podStartSLOduration=3.740607388 podStartE2EDuration="3.740607388s" podCreationTimestamp="2026-03-13 11:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:09.737233782 +0000 UTC m=+1033.213187131" watchObservedRunningTime="2026-03-13 11:14:09.740607388 +0000 UTC m=+1033.216560737" Mar 13 11:14:10.729024 master-0 kubenswrapper[33013]: I0313 11:14:10.728964 33013 generic.go:334] "Generic (PLEG): container finished" podID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerID="f4afc239bfc8e18d4e3ef40f2181f5ebe74b92387a5f498b902a27284eba8bf3" exitCode=0 Mar 13 11:14:10.729757 master-0 kubenswrapper[33013]: I0313 11:14:10.729734 33013 generic.go:334] "Generic (PLEG): container finished" podID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerID="1dce2530df1b96681748551a56a0d2149e701f021d5e492b94854bf68b6cb561" exitCode=0 Mar 13 
11:14:10.732236 master-0 kubenswrapper[33013]: I0313 11:14:10.732024 33013 generic.go:334] "Generic (PLEG): container finished" podID="92cce121-c716-4f17-8c76-edd30dec3d3b" containerID="ad3ccc22509ab5a42f57fc2ab4c6e38abc497e8fc18f9294523be5ce55eb8c1a" exitCode=0 Mar 13 11:14:10.734797 master-0 kubenswrapper[33013]: I0313 11:14:10.734767 33013 generic.go:334] "Generic (PLEG): container finished" podID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerID="8e5661b2a39e4da32a8d1b9522f5dc22eb5bdd6b17f475d0ac62686eedb5dcd8" exitCode=0 Mar 13 11:14:10.734881 master-0 kubenswrapper[33013]: I0313 11:14:10.734797 33013 generic.go:334] "Generic (PLEG): container finished" podID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerID="92136a8ac6376648da2845880d840c334bd57ffb3374dec57d9521620552252f" exitCode=0 Mar 13 11:14:10.735000 master-0 kubenswrapper[33013]: I0313 11:14:10.734958 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deda895d-6ed0-4306-85ad-3ea788b9d709" path="/var/lib/kubelet/pods/deda895d-6ed0-4306-85ad-3ea788b9d709/volumes" Mar 13 11:14:10.737145 master-0 kubenswrapper[33013]: I0313 11:14:10.737105 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-backup-0" event={"ID":"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8","Type":"ContainerDied","Data":"f4afc239bfc8e18d4e3ef40f2181f5ebe74b92387a5f498b902a27284eba8bf3"} Mar 13 11:14:10.737223 master-0 kubenswrapper[33013]: I0313 11:14:10.737150 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-backup-0" event={"ID":"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8","Type":"ContainerDied","Data":"1dce2530df1b96681748551a56a0d2149e701f021d5e492b94854bf68b6cb561"} Mar 13 11:14:10.737223 master-0 kubenswrapper[33013]: I0313 11:14:10.737168 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-glgrm" 
event={"ID":"92cce121-c716-4f17-8c76-edd30dec3d3b","Type":"ContainerDied","Data":"ad3ccc22509ab5a42f57fc2ab4c6e38abc497e8fc18f9294523be5ce55eb8c1a"} Mar 13 11:14:10.737223 master-0 kubenswrapper[33013]: I0313 11:14:10.737190 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" event={"ID":"b4d9d572-4830-4c6f-aa61-e836816ec94b","Type":"ContainerDied","Data":"8e5661b2a39e4da32a8d1b9522f5dc22eb5bdd6b17f475d0ac62686eedb5dcd8"} Mar 13 11:14:10.737223 master-0 kubenswrapper[33013]: I0313 11:14:10.737204 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" event={"ID":"b4d9d572-4830-4c6f-aa61-e836816ec94b","Type":"ContainerDied","Data":"92136a8ac6376648da2845880d840c334bd57ffb3374dec57d9521620552252f"} Mar 13 11:14:10.737431 master-0 kubenswrapper[33013]: I0313 11:14:10.737400 33013 generic.go:334] "Generic (PLEG): container finished" podID="69fa6a94-1b94-44ad-b7b3-5294d3f76e57" containerID="b6ce79bb0e7c0d40ddb6c669b378f19e75703411b1813afa0f48402ba562c62a" exitCode=0 Mar 13 11:14:10.737549 master-0 kubenswrapper[33013]: I0313 11:14:10.737517 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" event={"ID":"69fa6a94-1b94-44ad-b7b3-5294d3f76e57","Type":"ContainerDied","Data":"b6ce79bb0e7c0d40ddb6c669b378f19e75703411b1813afa0f48402ba562c62a"} Mar 13 11:14:10.746408 master-0 kubenswrapper[33013]: I0313 11:14:10.746341 33013 generic.go:334] "Generic (PLEG): container finished" podID="b3c23adf-bb65-4b0e-a687-c314205c0be8" containerID="66349258a27e5e5be6a59dcb50423a9f2dcdd4473bd9b96ed64ab824e4ba1204" exitCode=0 Mar 13 11:14:10.746520 master-0 kubenswrapper[33013]: I0313 11:14:10.746441 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq" 
event={"ID":"b3c23adf-bb65-4b0e-a687-c314205c0be8","Type":"ContainerDied","Data":"66349258a27e5e5be6a59dcb50423a9f2dcdd4473bd9b96ed64ab824e4ba1204"} Mar 13 11:14:10.752423 master-0 kubenswrapper[33013]: I0313 11:14:10.752277 33013 generic.go:334] "Generic (PLEG): container finished" podID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerID="0abc6675ab4a8fb48fac7693d560dabe62037ba9ff706a87f79e0e6be3a17f9c" exitCode=0 Mar 13 11:14:10.752423 master-0 kubenswrapper[33013]: I0313 11:14:10.752342 33013 generic.go:334] "Generic (PLEG): container finished" podID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerID="db9540b84c4bcd5308f09f6b463baf2bfbd12f01b6005000f33352eec609eaab" exitCode=0 Mar 13 11:14:10.752423 master-0 kubenswrapper[33013]: I0313 11:14:10.752384 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-scheduler-0" event={"ID":"616c0183-dbe2-4f18-8391-0d1772b7f375","Type":"ContainerDied","Data":"0abc6675ab4a8fb48fac7693d560dabe62037ba9ff706a87f79e0e6be3a17f9c"} Mar 13 11:14:10.752644 master-0 kubenswrapper[33013]: I0313 11:14:10.752431 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-scheduler-0" event={"ID":"616c0183-dbe2-4f18-8391-0d1772b7f375","Type":"ContainerDied","Data":"db9540b84c4bcd5308f09f6b463baf2bfbd12f01b6005000f33352eec609eaab"} Mar 13 11:14:11.009760 master-0 kubenswrapper[33013]: I0313 11:14:11.009684 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e1e7ab4-9e0c-41af-b7cd-465b872e71ad\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0874bf4a-4480-418a-ad82-97dbd5e10d31\") pod \"ironic-conductor-0\" (UID: \"e16baf7d-8440-4431-a184-523ae34f6e6f\") " pod="openstack/ironic-conductor-0" Mar 13 11:14:11.165664 master-0 kubenswrapper[33013]: I0313 11:14:11.165462 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Mar 13 11:14:11.723417 master-0 kubenswrapper[33013]: I0313 11:14:11.723351 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:11.803956 master-0 kubenswrapper[33013]: I0313 11:14:11.803143 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:11.803956 master-0 kubenswrapper[33013]: I0313 11:14:11.803352 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-scheduler-0" event={"ID":"616c0183-dbe2-4f18-8391-0d1772b7f375","Type":"ContainerDied","Data":"d367b194786e47261bdd0eddf10193cac82b4d89a67ec44e99657fad21dcdecd"} Mar 13 11:14:11.803956 master-0 kubenswrapper[33013]: I0313 11:14:11.803429 33013 scope.go:117] "RemoveContainer" containerID="0abc6675ab4a8fb48fac7693d560dabe62037ba9ff706a87f79e0e6be3a17f9c" Mar 13 11:14:11.861155 master-0 kubenswrapper[33013]: I0313 11:14:11.860484 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data\") pod \"616c0183-dbe2-4f18-8391-0d1772b7f375\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " Mar 13 11:14:11.861155 master-0 kubenswrapper[33013]: I0313 11:14:11.860634 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-scripts\") pod \"616c0183-dbe2-4f18-8391-0d1772b7f375\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " Mar 13 11:14:11.861155 master-0 kubenswrapper[33013]: I0313 11:14:11.860861 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data-custom\") pod 
\"616c0183-dbe2-4f18-8391-0d1772b7f375\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " Mar 13 11:14:11.861155 master-0 kubenswrapper[33013]: I0313 11:14:11.860982 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/616c0183-dbe2-4f18-8391-0d1772b7f375-etc-machine-id\") pod \"616c0183-dbe2-4f18-8391-0d1772b7f375\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " Mar 13 11:14:11.861155 master-0 kubenswrapper[33013]: I0313 11:14:11.861088 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616c0183-dbe2-4f18-8391-0d1772b7f375-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "616c0183-dbe2-4f18-8391-0d1772b7f375" (UID: "616c0183-dbe2-4f18-8391-0d1772b7f375"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:11.861155 master-0 kubenswrapper[33013]: I0313 11:14:11.861127 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-combined-ca-bundle\") pod \"616c0183-dbe2-4f18-8391-0d1772b7f375\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " Mar 13 11:14:11.861155 master-0 kubenswrapper[33013]: I0313 11:14:11.861148 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx6kg\" (UniqueName: \"kubernetes.io/projected/616c0183-dbe2-4f18-8391-0d1772b7f375-kube-api-access-rx6kg\") pod \"616c0183-dbe2-4f18-8391-0d1772b7f375\" (UID: \"616c0183-dbe2-4f18-8391-0d1772b7f375\") " Mar 13 11:14:11.863192 master-0 kubenswrapper[33013]: I0313 11:14:11.861825 33013 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/616c0183-dbe2-4f18-8391-0d1772b7f375-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:11.874043 master-0 
kubenswrapper[33013]: I0313 11:14:11.873972 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "616c0183-dbe2-4f18-8391-0d1772b7f375" (UID: "616c0183-dbe2-4f18-8391-0d1772b7f375"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:11.899615 master-0 kubenswrapper[33013]: I0313 11:14:11.897323 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616c0183-dbe2-4f18-8391-0d1772b7f375-kube-api-access-rx6kg" (OuterVolumeSpecName: "kube-api-access-rx6kg") pod "616c0183-dbe2-4f18-8391-0d1772b7f375" (UID: "616c0183-dbe2-4f18-8391-0d1772b7f375"). InnerVolumeSpecName "kube-api-access-rx6kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:11.899615 master-0 kubenswrapper[33013]: I0313 11:14:11.897427 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-scripts" (OuterVolumeSpecName: "scripts") pod "616c0183-dbe2-4f18-8391-0d1772b7f375" (UID: "616c0183-dbe2-4f18-8391-0d1772b7f375"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:11.970266 master-0 kubenswrapper[33013]: I0313 11:14:11.967126 33013 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:11.970266 master-0 kubenswrapper[33013]: I0313 11:14:11.967165 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx6kg\" (UniqueName: \"kubernetes.io/projected/616c0183-dbe2-4f18-8391-0d1772b7f375-kube-api-access-rx6kg\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:11.970266 master-0 kubenswrapper[33013]: I0313 11:14:11.967178 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:12.072160 master-0 kubenswrapper[33013]: I0313 11:14:12.061771 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-5987cf94cc-zcxvf"] Mar 13 11:14:12.072160 master-0 kubenswrapper[33013]: E0313 11:14:12.062447 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerName="cinder-scheduler" Mar 13 11:14:12.072160 master-0 kubenswrapper[33013]: I0313 11:14:12.062466 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerName="cinder-scheduler" Mar 13 11:14:12.072160 master-0 kubenswrapper[33013]: E0313 11:14:12.062486 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerName="probe" Mar 13 11:14:12.072160 master-0 kubenswrapper[33013]: I0313 11:14:12.062494 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerName="probe" Mar 13 11:14:12.072160 master-0 kubenswrapper[33013]: I0313 11:14:12.070145 33013 
memory_manager.go:354] "RemoveStaleState removing state" podUID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerName="probe" Mar 13 11:14:12.072160 master-0 kubenswrapper[33013]: I0313 11:14:12.070207 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="616c0183-dbe2-4f18-8391-0d1772b7f375" containerName="cinder-scheduler" Mar 13 11:14:12.072703 master-0 kubenswrapper[33013]: I0313 11:14:12.072261 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.091032 master-0 kubenswrapper[33013]: I0313 11:14:12.076030 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Mar 13 11:14:12.091032 master-0 kubenswrapper[33013]: I0313 11:14:12.076382 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Mar 13 11:14:12.091032 master-0 kubenswrapper[33013]: I0313 11:14:12.088879 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5987cf94cc-zcxvf"] Mar 13 11:14:12.204738 master-0 kubenswrapper[33013]: I0313 11:14:12.203494 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-config-data-merged\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.204738 master-0 kubenswrapper[33013]: I0313 11:14:12.203572 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-config-data-custom\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.204738 master-0 kubenswrapper[33013]: I0313 11:14:12.203613 33013 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-config-data\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.204738 master-0 kubenswrapper[33013]: I0313 11:14:12.203642 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-combined-ca-bundle\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.204738 master-0 kubenswrapper[33013]: I0313 11:14:12.203658 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-public-tls-certs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.204738 master-0 kubenswrapper[33013]: I0313 11:14:12.203690 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-internal-tls-certs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.204738 master-0 kubenswrapper[33013]: I0313 11:14:12.203712 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-etc-podinfo\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.204738 
master-0 kubenswrapper[33013]: I0313 11:14:12.203736 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-logs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.204738 master-0 kubenswrapper[33013]: I0313 11:14:12.203845 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-scripts\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.204738 master-0 kubenswrapper[33013]: I0313 11:14:12.203871 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdvs\" (UniqueName: \"kubernetes.io/projected/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-kube-api-access-lmdvs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.206981 master-0 kubenswrapper[33013]: I0313 11:14:12.205838 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "616c0183-dbe2-4f18-8391-0d1772b7f375" (UID: "616c0183-dbe2-4f18-8391-0d1772b7f375"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:12.275394 master-0 kubenswrapper[33013]: I0313 11:14:12.275286 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data" (OuterVolumeSpecName: "config-data") pod "616c0183-dbe2-4f18-8391-0d1772b7f375" (UID: "616c0183-dbe2-4f18-8391-0d1772b7f375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.305725 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-internal-tls-certs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306443 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-etc-podinfo\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306477 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-logs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306566 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-scripts\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " 
pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306691 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmdvs\" (UniqueName: \"kubernetes.io/projected/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-kube-api-access-lmdvs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306770 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-config-data-merged\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306810 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-config-data-custom\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306835 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-config-data\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306860 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-combined-ca-bundle\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " 
pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306882 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-public-tls-certs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306977 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:12.309256 master-0 kubenswrapper[33013]: I0313 11:14:12.306996 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616c0183-dbe2-4f18-8391-0d1772b7f375-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:12.311118 master-0 kubenswrapper[33013]: I0313 11:14:12.309978 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-logs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.315452 master-0 kubenswrapper[33013]: I0313 11:14:12.311524 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-public-tls-certs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.315452 master-0 kubenswrapper[33013]: I0313 11:14:12.311772 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-config-data-merged\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.315452 master-0 kubenswrapper[33013]: I0313 11:14:12.314300 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-etc-podinfo\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.326958 master-0 kubenswrapper[33013]: I0313 11:14:12.322441 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-config-data\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.326958 master-0 kubenswrapper[33013]: I0313 11:14:12.324891 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-config-data-custom\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.326958 master-0 kubenswrapper[33013]: I0313 11:14:12.326157 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-scripts\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.337603 master-0 kubenswrapper[33013]: I0313 11:14:12.328025 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-internal-tls-certs\") pod 
\"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.337603 master-0 kubenswrapper[33013]: I0313 11:14:12.328121 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-combined-ca-bundle\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.337603 master-0 kubenswrapper[33013]: I0313 11:14:12.334711 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmdvs\" (UniqueName: \"kubernetes.io/projected/6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c-kube-api-access-lmdvs\") pod \"ironic-5987cf94cc-zcxvf\" (UID: \"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c\") " pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.469696 master-0 kubenswrapper[33013]: I0313 11:14:12.466909 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:12.491757 master-0 kubenswrapper[33013]: I0313 11:14:12.485684 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceac4-scheduler-0"] Mar 13 11:14:12.558763 master-0 kubenswrapper[33013]: I0313 11:14:12.541350 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ceac4-scheduler-0"] Mar 13 11:14:12.571771 master-0 kubenswrapper[33013]: I0313 11:14:12.571680 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceac4-scheduler-0"] Mar 13 11:14:12.574253 master-0 kubenswrapper[33013]: I0313 11:14:12.574211 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.576985 master-0 kubenswrapper[33013]: I0313 11:14:12.576945 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-scheduler-config-data" Mar 13 11:14:12.583258 master-0 kubenswrapper[33013]: I0313 11:14:12.583185 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-scheduler-0"] Mar 13 11:14:12.715697 master-0 kubenswrapper[33013]: I0313 11:14:12.714882 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbfdad2d-8dce-47a6-9981-b7d7b984db88-etc-machine-id\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.728228 master-0 kubenswrapper[33013]: I0313 11:14:12.716325 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-combined-ca-bundle\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.728228 master-0 kubenswrapper[33013]: I0313 11:14:12.716372 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-config-data-custom\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.728228 master-0 kubenswrapper[33013]: I0313 11:14:12.716613 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-config-data\") pod \"cinder-ceac4-scheduler-0\" 
(UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.728228 master-0 kubenswrapper[33013]: I0313 11:14:12.716893 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdjkf\" (UniqueName: \"kubernetes.io/projected/dbfdad2d-8dce-47a6-9981-b7d7b984db88-kube-api-access-mdjkf\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.728228 master-0 kubenswrapper[33013]: I0313 11:14:12.717181 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-scripts\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.735720 master-0 kubenswrapper[33013]: I0313 11:14:12.735647 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="616c0183-dbe2-4f18-8391-0d1772b7f375" path="/var/lib/kubelet/pods/616c0183-dbe2-4f18-8391-0d1772b7f375/volumes" Mar 13 11:14:12.823733 master-0 kubenswrapper[33013]: I0313 11:14:12.823667 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbfdad2d-8dce-47a6-9981-b7d7b984db88-etc-machine-id\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.824301 master-0 kubenswrapper[33013]: I0313 11:14:12.823765 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-combined-ca-bundle\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.824301 
master-0 kubenswrapper[33013]: I0313 11:14:12.823790 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-config-data-custom\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.824301 master-0 kubenswrapper[33013]: I0313 11:14:12.823849 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-config-data\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.824301 master-0 kubenswrapper[33013]: I0313 11:14:12.823916 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdjkf\" (UniqueName: \"kubernetes.io/projected/dbfdad2d-8dce-47a6-9981-b7d7b984db88-kube-api-access-mdjkf\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.824301 master-0 kubenswrapper[33013]: I0313 11:14:12.824026 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-scripts\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.828407 master-0 kubenswrapper[33013]: I0313 11:14:12.828338 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-combined-ca-bundle\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.828496 master-0 kubenswrapper[33013]: 
I0313 11:14:12.828445 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbfdad2d-8dce-47a6-9981-b7d7b984db88-etc-machine-id\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.831030 master-0 kubenswrapper[33013]: I0313 11:14:12.830982 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-config-data-custom\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.841020 master-0 kubenswrapper[33013]: I0313 11:14:12.840957 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-scripts\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.849618 master-0 kubenswrapper[33013]: I0313 11:14:12.849010 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfdad2d-8dce-47a6-9981-b7d7b984db88-config-data\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.852611 master-0 kubenswrapper[33013]: I0313 11:14:12.852539 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdjkf\" (UniqueName: \"kubernetes.io/projected/dbfdad2d-8dce-47a6-9981-b7d7b984db88-kube-api-access-mdjkf\") pod \"cinder-ceac4-scheduler-0\" (UID: \"dbfdad2d-8dce-47a6-9981-b7d7b984db88\") " pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:12.917108 master-0 kubenswrapper[33013]: I0313 11:14:12.917051 33013 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:13.005707 master-0 kubenswrapper[33013]: I0313 11:14:13.002454 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f9957b47c-z6jkk" podUID="deda895d-6ed0-4306-85ad-3ea788b9d709" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.226:5353: i/o timeout" Mar 13 11:14:13.421998 master-0 kubenswrapper[33013]: I0313 11:14:13.421940 33013 scope.go:117] "RemoveContainer" containerID="db9540b84c4bcd5308f09f6b463baf2bfbd12f01b6005000f33352eec609eaab" Mar 13 11:14:13.722569 master-0 kubenswrapper[33013]: I0313 11:14:13.722529 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-glgrm" Mar 13 11:14:13.764330 master-0 kubenswrapper[33013]: I0313 11:14:13.761502 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq" Mar 13 11:14:13.844344 master-0 kubenswrapper[33013]: I0313 11:14:13.844254 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-glgrm" event={"ID":"92cce121-c716-4f17-8c76-edd30dec3d3b","Type":"ContainerDied","Data":"dbad163bbdecff94b53c4f1d01c1ad80eac45c0da79c439694737c1cb682bf97"} Mar 13 11:14:13.844344 master-0 kubenswrapper[33013]: I0313 11:14:13.844299 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbad163bbdecff94b53c4f1d01c1ad80eac45c0da79c439694737c1cb682bf97" Mar 13 11:14:13.844874 master-0 kubenswrapper[33013]: I0313 11:14:13.844351 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-glgrm" Mar 13 11:14:13.846802 master-0 kubenswrapper[33013]: I0313 11:14:13.846777 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq" event={"ID":"b3c23adf-bb65-4b0e-a687-c314205c0be8","Type":"ContainerDied","Data":"83c0c5281421cce910620397878775d3f79baf918898deb0672efcb84351dc92"} Mar 13 11:14:13.846863 master-0 kubenswrapper[33013]: I0313 11:14:13.846803 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83c0c5281421cce910620397878775d3f79baf918898deb0672efcb84351dc92" Mar 13 11:14:13.846863 master-0 kubenswrapper[33013]: I0313 11:14:13.846839 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-4eb2-account-create-update-kgbrq" Mar 13 11:14:13.857361 master-0 kubenswrapper[33013]: I0313 11:14:13.857282 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-227ss\" (UniqueName: \"kubernetes.io/projected/92cce121-c716-4f17-8c76-edd30dec3d3b-kube-api-access-227ss\") pod \"92cce121-c716-4f17-8c76-edd30dec3d3b\" (UID: \"92cce121-c716-4f17-8c76-edd30dec3d3b\") " Mar 13 11:14:13.857645 master-0 kubenswrapper[33013]: I0313 11:14:13.857622 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cce121-c716-4f17-8c76-edd30dec3d3b-operator-scripts\") pod \"92cce121-c716-4f17-8c76-edd30dec3d3b\" (UID: \"92cce121-c716-4f17-8c76-edd30dec3d3b\") " Mar 13 11:14:13.859485 master-0 kubenswrapper[33013]: I0313 11:14:13.859456 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92cce121-c716-4f17-8c76-edd30dec3d3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92cce121-c716-4f17-8c76-edd30dec3d3b" (UID: "92cce121-c716-4f17-8c76-edd30dec3d3b"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:13.867832 master-0 kubenswrapper[33013]: I0313 11:14:13.865524 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cce121-c716-4f17-8c76-edd30dec3d3b-kube-api-access-227ss" (OuterVolumeSpecName: "kube-api-access-227ss") pod "92cce121-c716-4f17-8c76-edd30dec3d3b" (UID: "92cce121-c716-4f17-8c76-edd30dec3d3b"). InnerVolumeSpecName "kube-api-access-227ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:13.964158 master-0 kubenswrapper[33013]: I0313 11:14:13.963361 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3c23adf-bb65-4b0e-a687-c314205c0be8-operator-scripts\") pod \"b3c23adf-bb65-4b0e-a687-c314205c0be8\" (UID: \"b3c23adf-bb65-4b0e-a687-c314205c0be8\") " Mar 13 11:14:13.964158 master-0 kubenswrapper[33013]: I0313 11:14:13.963634 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57hp9\" (UniqueName: \"kubernetes.io/projected/b3c23adf-bb65-4b0e-a687-c314205c0be8-kube-api-access-57hp9\") pod \"b3c23adf-bb65-4b0e-a687-c314205c0be8\" (UID: \"b3c23adf-bb65-4b0e-a687-c314205c0be8\") " Mar 13 11:14:13.964406 master-0 kubenswrapper[33013]: I0313 11:14:13.964373 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92cce121-c716-4f17-8c76-edd30dec3d3b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:13.964406 master-0 kubenswrapper[33013]: I0313 11:14:13.964397 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-227ss\" (UniqueName: \"kubernetes.io/projected/92cce121-c716-4f17-8c76-edd30dec3d3b-kube-api-access-227ss\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:13.968401 master-0 kubenswrapper[33013]: I0313 11:14:13.968335 
33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c23adf-bb65-4b0e-a687-c314205c0be8-kube-api-access-57hp9" (OuterVolumeSpecName: "kube-api-access-57hp9") pod "b3c23adf-bb65-4b0e-a687-c314205c0be8" (UID: "b3c23adf-bb65-4b0e-a687-c314205c0be8"). InnerVolumeSpecName "kube-api-access-57hp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:13.977693 master-0 kubenswrapper[33013]: I0313 11:14:13.977598 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3c23adf-bb65-4b0e-a687-c314205c0be8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3c23adf-bb65-4b0e-a687-c314205c0be8" (UID: "b3c23adf-bb65-4b0e-a687-c314205c0be8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:14.066850 master-0 kubenswrapper[33013]: I0313 11:14:14.066802 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3c23adf-bb65-4b0e-a687-c314205c0be8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.067392 master-0 kubenswrapper[33013]: I0313 11:14:14.066856 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57hp9\" (UniqueName: \"kubernetes.io/projected/b3c23adf-bb65-4b0e-a687-c314205c0be8-kube-api-access-57hp9\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.256806 master-0 kubenswrapper[33013]: I0313 11:14:14.256437 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:14.260825 master-0 kubenswrapper[33013]: I0313 11:14:14.260801 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:14.374173 master-0 kubenswrapper[33013]: I0313 11:14:14.374019 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data-custom\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.375169 master-0 kubenswrapper[33013]: I0313 11:14:14.375144 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6xp2\" (UniqueName: \"kubernetes.io/projected/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-kube-api-access-c6xp2\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.375369 master-0 kubenswrapper[33013]: I0313 11:14:14.375350 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-machine-id\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.375489 master-0 kubenswrapper[33013]: I0313 11:14:14.375472 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-lib-cinder\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.375698 master-0 kubenswrapper[33013]: I0313 11:14:14.375677 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-run\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.375875 master-0 kubenswrapper[33013]: I0313 11:14:14.375855 33013 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data-custom\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.376583 master-0 kubenswrapper[33013]: I0313 11:14:14.376564 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-nvme\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.376739 master-0 kubenswrapper[33013]: I0313 11:14:14.376721 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-machine-id\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.376852 master-0 kubenswrapper[33013]: I0313 11:14:14.376832 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-brick\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.377021 master-0 kubenswrapper[33013]: I0313 11:14:14.377003 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-iscsi\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.377126 master-0 kubenswrapper[33013]: I0313 11:14:14.377110 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-cinder\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.377234 master-0 kubenswrapper[33013]: I0313 11:14:14.377216 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-cinder\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.377339 master-0 kubenswrapper[33013]: I0313 11:14:14.377318 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-combined-ca-bundle\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.377434 master-0 kubenswrapper[33013]: I0313 11:14:14.377418 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-iscsi\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.385753 master-0 kubenswrapper[33013]: I0313 11:14:14.376128 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.385753 master-0 kubenswrapper[33013]: I0313 11:14:14.382512 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.385753 master-0 kubenswrapper[33013]: I0313 11:14:14.382558 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-run" (OuterVolumeSpecName: "run") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.385753 master-0 kubenswrapper[33013]: I0313 11:14:14.384392 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.385753 master-0 kubenswrapper[33013]: I0313 11:14:14.384514 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.385753 master-0 kubenswrapper[33013]: I0313 11:14:14.384543 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.385753 master-0 kubenswrapper[33013]: I0313 11:14:14.384573 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.385753 master-0 kubenswrapper[33013]: I0313 11:14:14.384618 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.385753 master-0 kubenswrapper[33013]: I0313 11:14:14.385058 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.398271 master-0 kubenswrapper[33013]: I0313 11:14:14.388332 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:14.398271 master-0 kubenswrapper[33013]: I0313 11:14:14.388475 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.401298 master-0 kubenswrapper[33013]: I0313 11:14:14.401216 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-kube-api-access-c6xp2" (OuterVolumeSpecName: "kube-api-access-c6xp2") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "kube-api-access-c6xp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:14.401580 master-0 kubenswrapper[33013]: I0313 11:14:14.401552 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:14.401765 master-0 kubenswrapper[33013]: I0313 11:14:14.401740 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-run\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.401902 master-0 kubenswrapper[33013]: I0313 11:14:14.401883 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-dev\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.402064 master-0 kubenswrapper[33013]: I0313 11:14:14.402044 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-lib-modules\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.402170 master-0 kubenswrapper[33013]: I0313 11:14:14.402149 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-sys\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.402311 master-0 kubenswrapper[33013]: I0313 11:14:14.402293 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-nvme\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.402470 master-0 kubenswrapper[33013]: I0313 11:14:14.402448 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-lib-modules\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.402618 master-0 kubenswrapper[33013]: I0313 11:14:14.402580 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-scripts\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.412744 master-0 kubenswrapper[33013]: I0313 11:14:14.412702 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.413325 master-0 kubenswrapper[33013]: I0313 11:14:14.413305 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-scripts\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.414599 master-0 kubenswrapper[33013]: I0313 11:14:14.414563 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.414750 master-0 kubenswrapper[33013]: I0313 11:14:14.414727 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-combined-ca-bundle\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.414878 master-0 
kubenswrapper[33013]: I0313 11:14:14.414860 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-dev\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.414986 master-0 kubenswrapper[33013]: I0313 11:14:14.414967 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-lib-cinder\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.415117 master-0 kubenswrapper[33013]: I0313 11:14:14.415098 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-sys\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.415235 master-0 kubenswrapper[33013]: I0313 11:14:14.415214 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp89h\" (UniqueName: \"kubernetes.io/projected/b4d9d572-4830-4c6f-aa61-e836816ec94b-kube-api-access-bp89h\") pod \"b4d9d572-4830-4c6f-aa61-e836816ec94b\" (UID: \"b4d9d572-4830-4c6f-aa61-e836816ec94b\") " Mar 13 11:14:14.415359 master-0 kubenswrapper[33013]: I0313 11:14:14.402646 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.402693 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-run" (OuterVolumeSpecName: "run") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.402715 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-dev" (OuterVolumeSpecName: "dev") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.402743 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.402764 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-sys" (OuterVolumeSpecName: "sys") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.402788 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.408989 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-scripts" (OuterVolumeSpecName: "scripts") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.415237 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-dev" (OuterVolumeSpecName: "dev") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.415282 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-sys" (OuterVolumeSpecName: "sys") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.415300 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.415455 master-0 kubenswrapper[33013]: I0313 11:14:14.415322 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-brick\") pod \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\" (UID: \"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8\") " Mar 13 11:14:14.416241 master-0 kubenswrapper[33013]: I0313 11:14:14.416211 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 11:14:14.429281 master-0 kubenswrapper[33013]: I0313 11:14:14.429209 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d9d572-4830-4c6f-aa61-e836816ec94b-kube-api-access-bp89h" (OuterVolumeSpecName: "kube-api-access-bp89h") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "kube-api-access-bp89h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:14.430656 master-0 kubenswrapper[33013]: I0313 11:14:14.430561 33013 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.430850 master-0 kubenswrapper[33013]: I0313 11:14:14.430832 33013 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-run\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.430989 master-0 kubenswrapper[33013]: I0313 11:14:14.430974 33013 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.431094 master-0 kubenswrapper[33013]: I0313 11:14:14.431078 33013 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-nvme\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.431257 master-0 kubenswrapper[33013]: I0313 11:14:14.431243 33013 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.431359 master-0 kubenswrapper[33013]: I0313 11:14:14.431347 33013 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.431483 master-0 kubenswrapper[33013]: I0313 11:14:14.431452 33013 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-iscsi\") on node 
\"master-0\" DevicePath \"\"" Mar 13 11:14:14.431698 master-0 kubenswrapper[33013]: I0313 11:14:14.431660 33013 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.431818 master-0 kubenswrapper[33013]: I0313 11:14:14.431803 33013 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.431953 master-0 kubenswrapper[33013]: I0313 11:14:14.431938 33013 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.432059 master-0 kubenswrapper[33013]: I0313 11:14:14.432045 33013 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-run\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.432320 master-0 kubenswrapper[33013]: I0313 11:14:14.432304 33013 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-dev\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.432496 master-0 kubenswrapper[33013]: I0313 11:14:14.432482 33013 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-lib-modules\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.432628 master-0 kubenswrapper[33013]: I0313 11:14:14.432612 33013 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-sys\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.432799 master-0 
kubenswrapper[33013]: I0313 11:14:14.432782 33013 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b4d9d572-4830-4c6f-aa61-e836816ec94b-etc-nvme\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.433002 master-0 kubenswrapper[33013]: I0313 11:14:14.432984 33013 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-lib-modules\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.433137 master-0 kubenswrapper[33013]: I0313 11:14:14.433123 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.434055 master-0 kubenswrapper[33013]: I0313 11:14:14.434037 33013 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-dev\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.434218 master-0 kubenswrapper[33013]: I0313 11:14:14.434198 33013 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.434365 master-0 kubenswrapper[33013]: I0313 11:14:14.434324 33013 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-sys\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.434481 master-0 kubenswrapper[33013]: I0313 11:14:14.434462 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp89h\" (UniqueName: \"kubernetes.io/projected/b4d9d572-4830-4c6f-aa61-e836816ec94b-kube-api-access-bp89h\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.434626 master-0 kubenswrapper[33013]: I0313 11:14:14.434584 33013 
reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.434753 master-0 kubenswrapper[33013]: I0313 11:14:14.434738 33013 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.434957 master-0 kubenswrapper[33013]: I0313 11:14:14.434917 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6xp2\" (UniqueName: \"kubernetes.io/projected/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-kube-api-access-c6xp2\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.435076 master-0 kubenswrapper[33013]: I0313 11:14:14.435059 33013 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.435411 master-0 kubenswrapper[33013]: I0313 11:14:14.435386 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-scripts" (OuterVolumeSpecName: "scripts") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:14.485291 master-0 kubenswrapper[33013]: I0313 11:14:14.485185 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5987cf94cc-zcxvf"] Mar 13 11:14:14.529377 master-0 kubenswrapper[33013]: I0313 11:14:14.528542 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-scheduler-0"] Mar 13 11:14:14.538170 master-0 kubenswrapper[33013]: I0313 11:14:14.538108 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.563765 master-0 kubenswrapper[33013]: I0313 11:14:14.563663 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:14.603098 master-0 kubenswrapper[33013]: I0313 11:14:14.602990 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 13 11:14:14.624637 master-0 kubenswrapper[33013]: W0313 11:14:14.624475 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode16baf7d_8440_4431_a184_523ae34f6e6f.slice/crio-ddde876151785cf4c0d12cd6ba7a3b9e9a5cbe7741dbb16447e5fb1a3affb833 WatchSource:0}: Error finding container ddde876151785cf4c0d12cd6ba7a3b9e9a5cbe7741dbb16447e5fb1a3affb833: Status 404 returned error can't find the container with id ddde876151785cf4c0d12cd6ba7a3b9e9a5cbe7741dbb16447e5fb1a3affb833 Mar 13 11:14:14.632751 master-0 kubenswrapper[33013]: I0313 11:14:14.632668 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:14.640785 master-0 kubenswrapper[33013]: I0313 11:14:14.640490 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.640785 master-0 kubenswrapper[33013]: I0313 11:14:14.640544 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:14.911704 master-0 kubenswrapper[33013]: I0313 11:14:14.902813 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-backup-0" event={"ID":"f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8","Type":"ContainerDied","Data":"b43285cc6e42a3616d43badbd9e621a1e7db93d21c4001ad35764f85c9a54596"} Mar 13 11:14:14.911704 master-0 kubenswrapper[33013]: I0313 11:14:14.903066 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:14.911704 master-0 kubenswrapper[33013]: I0313 11:14:14.903067 33013 scope.go:117] "RemoveContainer" containerID="f4afc239bfc8e18d4e3ef40f2181f5ebe74b92387a5f498b902a27284eba8bf3" Mar 13 11:14:14.921627 master-0 kubenswrapper[33013]: I0313 11:14:14.920820 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerStarted","Data":"ddde876151785cf4c0d12cd6ba7a3b9e9a5cbe7741dbb16447e5fb1a3affb833"} Mar 13 11:14:14.933427 master-0 kubenswrapper[33013]: I0313 11:14:14.932707 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5987cf94cc-zcxvf" event={"ID":"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c","Type":"ContainerStarted","Data":"dc8a37e309bd215c01e28430226fd579f0b087519af37649d542731596dec643"} Mar 13 11:14:14.983706 master-0 kubenswrapper[33013]: I0313 11:14:14.978843 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" event={"ID":"69fa6a94-1b94-44ad-b7b3-5294d3f76e57","Type":"ContainerStarted","Data":"c87ada79b3b2ed28b4ccbec15f51a2270514a36795fad9a27b65f22df6634622"} Mar 13 11:14:14.983706 master-0 kubenswrapper[33013]: I0313 11:14:14.980338 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" Mar 13 11:14:15.016994 master-0 kubenswrapper[33013]: I0313 11:14:15.000655 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" event={"ID":"b4d9d572-4830-4c6f-aa61-e836816ec94b","Type":"ContainerDied","Data":"bc8d0a9fff51eb4044acdcdfcb84483081392991400e48cd89d7b38c7525514c"} Mar 13 11:14:15.016994 master-0 kubenswrapper[33013]: I0313 11:14:15.000803 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:15.016994 master-0 kubenswrapper[33013]: I0313 11:14:15.004552 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-scheduler-0" event={"ID":"dbfdad2d-8dce-47a6-9981-b7d7b984db88","Type":"ContainerStarted","Data":"8abb68b8ba52a8baf43fe09eb5d6d993b01e6d1bd26be234e077649635d4fa3e"} Mar 13 11:14:15.016994 master-0 kubenswrapper[33013]: I0313 11:14:15.012481 33013 generic.go:334] "Generic (PLEG): container finished" podID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerID="711ee6050a695ebff3bfa2dbf73de3d552785202230091172158e03672ec0c6a" exitCode=0 Mar 13 11:14:15.016994 master-0 kubenswrapper[33013]: I0313 11:14:15.013688 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7657b6885c-5c572" event={"ID":"0341ff60-4819-448f-98f7-4ee8216d5d39","Type":"ContainerDied","Data":"711ee6050a695ebff3bfa2dbf73de3d552785202230091172158e03672ec0c6a"} Mar 13 11:14:15.058467 master-0 kubenswrapper[33013]: I0313 11:14:15.057203 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" podStartSLOduration=8.057183196 podStartE2EDuration="8.057183196s" podCreationTimestamp="2026-03-13 11:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:15.034694632 +0000 UTC m=+1038.510648001" watchObservedRunningTime="2026-03-13 11:14:15.057183196 +0000 UTC m=+1038.533136545" Mar 13 11:14:15.067055 master-0 kubenswrapper[33013]: I0313 11:14:15.066305 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data" (OuterVolumeSpecName: "config-data") pod "b4d9d572-4830-4c6f-aa61-e836816ec94b" (UID: "b4d9d572-4830-4c6f-aa61-e836816ec94b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:15.084837 master-0 kubenswrapper[33013]: I0313 11:14:15.082973 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" event={"ID":"3b99f02d-f8e2-497b-b68b-8e445e7b7541","Type":"ContainerStarted","Data":"d28239b3ecad677258f0a8ab2dcbfcbf225e5fc929b3f5b896bdf17cf2238051"} Mar 13 11:14:15.084837 master-0 kubenswrapper[33013]: I0313 11:14:15.083819 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" Mar 13 11:14:15.104619 master-0 kubenswrapper[33013]: I0313 11:14:15.104231 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d9d572-4830-4c6f-aa61-e836816ec94b-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:15.147954 master-0 kubenswrapper[33013]: I0313 11:14:15.147874 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data" (OuterVolumeSpecName: "config-data") pod "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" (UID: "f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:15.206926 master-0 kubenswrapper[33013]: I0313 11:14:15.206820 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" podStartSLOduration=4.254054729 podStartE2EDuration="9.206798567s" podCreationTimestamp="2026-03-13 11:14:06 +0000 UTC" firstStartedPulling="2026-03-13 11:14:08.601616697 +0000 UTC m=+1032.077570046" lastFinishedPulling="2026-03-13 11:14:13.554360535 +0000 UTC m=+1037.030313884" observedRunningTime="2026-03-13 11:14:15.127092506 +0000 UTC m=+1038.603045855" watchObservedRunningTime="2026-03-13 11:14:15.206798567 +0000 UTC m=+1038.682751916" Mar 13 11:14:15.222797 master-0 kubenswrapper[33013]: I0313 11:14:15.222752 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:15.360015 master-0 kubenswrapper[33013]: I0313 11:14:15.354736 33013 scope.go:117] "RemoveContainer" containerID="1dce2530df1b96681748551a56a0d2149e701f021d5e492b94854bf68b6cb561" Mar 13 11:14:15.431638 master-0 kubenswrapper[33013]: I0313 11:14:15.428230 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceac4-backup-0"] Mar 13 11:14:15.446630 master-0 kubenswrapper[33013]: I0313 11:14:15.442766 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ceac4-backup-0"] Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.462707 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceac4-backup-0"] Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: E0313 11:14:15.463388 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerName="cinder-volume" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463411 33013 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerName="cinder-volume" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: E0313 11:14:15.463450 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c23adf-bb65-4b0e-a687-c314205c0be8" containerName="mariadb-account-create-update" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463461 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c23adf-bb65-4b0e-a687-c314205c0be8" containerName="mariadb-account-create-update" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: E0313 11:14:15.463480 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerName="probe" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463488 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerName="probe" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: E0313 11:14:15.463531 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cce121-c716-4f17-8c76-edd30dec3d3b" containerName="mariadb-database-create" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463540 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cce121-c716-4f17-8c76-edd30dec3d3b" containerName="mariadb-database-create" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: E0313 11:14:15.463560 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerName="cinder-backup" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463568 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerName="cinder-backup" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: E0313 11:14:15.463616 33013 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerName="probe" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463624 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerName="probe" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463846 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerName="probe" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463867 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c23adf-bb65-4b0e-a687-c314205c0be8" containerName="mariadb-account-create-update" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463887 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cce121-c716-4f17-8c76-edd30dec3d3b" containerName="mariadb-database-create" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463899 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" containerName="cinder-backup" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463909 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerName="probe" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.463941 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d9d572-4830-4c6f-aa61-e836816ec94b" containerName="cinder-volume" Mar 13 11:14:15.477628 master-0 kubenswrapper[33013]: I0313 11:14:15.465146 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.490263 master-0 kubenswrapper[33013]: I0313 11:14:15.484343 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-backup-0"] Mar 13 11:14:15.490263 master-0 kubenswrapper[33013]: I0313 11:14:15.487007 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-backup-config-data" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.637842 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-config-data\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.637958 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-lib-modules\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.637986 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-combined-ca-bundle\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638008 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-etc-machine-id\") pod \"cinder-ceac4-backup-0\" (UID: 
\"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638030 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkfc\" (UniqueName: \"kubernetes.io/projected/3c3dc21b-6425-45c1-a7f8-56336972b4ca-kube-api-access-zwkfc\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638055 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-etc-nvme\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638081 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-var-lib-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638124 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-var-locks-brick\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638148 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-dev\") pod 
\"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638179 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-etc-iscsi\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638244 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-config-data-custom\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638373 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-var-locks-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638440 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-scripts\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638461 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-run\") pod 
\"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.641725 master-0 kubenswrapper[33013]: I0313 11:14:15.638490 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-sys\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.745064 master-0 kubenswrapper[33013]: I0313 11:14:15.744197 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-etc-nvme\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.745064 master-0 kubenswrapper[33013]: I0313 11:14:15.744260 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-var-lib-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.745064 master-0 kubenswrapper[33013]: I0313 11:14:15.744296 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-var-locks-brick\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.745064 master-0 kubenswrapper[33013]: I0313 11:14:15.744320 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-dev\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " 
pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.745064 master-0 kubenswrapper[33013]: I0313 11:14:15.744340 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-etc-iscsi\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.745064 master-0 kubenswrapper[33013]: I0313 11:14:15.744391 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-config-data-custom\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.745064 master-0 kubenswrapper[33013]: I0313 11:14:15.744422 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-var-locks-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.745064 master-0 kubenswrapper[33013]: I0313 11:14:15.744686 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-etc-nvme\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.746938 master-0 kubenswrapper[33013]: I0313 11:14:15.746883 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-scripts\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.747012 master-0 kubenswrapper[33013]: 
I0313 11:14:15.746957 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-run\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.747091 master-0 kubenswrapper[33013]: I0313 11:14:15.747038 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-sys\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.747231 master-0 kubenswrapper[33013]: I0313 11:14:15.747127 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-config-data\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.747385 master-0 kubenswrapper[33013]: I0313 11:14:15.747350 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-lib-modules\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.747432 master-0 kubenswrapper[33013]: I0313 11:14:15.747395 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-combined-ca-bundle\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.747564 master-0 kubenswrapper[33013]: I0313 11:14:15.747524 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-etc-machine-id\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.747659 master-0 kubenswrapper[33013]: I0313 11:14:15.747610 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkfc\" (UniqueName: \"kubernetes.io/projected/3c3dc21b-6425-45c1-a7f8-56336972b4ca-kube-api-access-zwkfc\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.749321 master-0 kubenswrapper[33013]: I0313 11:14:15.749264 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-var-lib-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.749391 master-0 kubenswrapper[33013]: I0313 11:14:15.749329 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-var-locks-brick\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.749391 master-0 kubenswrapper[33013]: I0313 11:14:15.749364 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-dev\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.749452 master-0 kubenswrapper[33013]: I0313 11:14:15.749390 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-etc-iscsi\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.754454 master-0 kubenswrapper[33013]: I0313 11:14:15.754207 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-sys\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.754454 master-0 kubenswrapper[33013]: I0313 11:14:15.754335 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-var-locks-cinder\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.755777 master-0 kubenswrapper[33013]: I0313 11:14:15.755709 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-run\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.755841 master-0 kubenswrapper[33013]: I0313 11:14:15.755788 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-lib-modules\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.759303 master-0 kubenswrapper[33013]: I0313 11:14:15.757414 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3dc21b-6425-45c1-a7f8-56336972b4ca-etc-machine-id\") pod \"cinder-ceac4-backup-0\" (UID: 
\"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.767179 master-0 kubenswrapper[33013]: I0313 11:14:15.767124 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-combined-ca-bundle\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.782963 master-0 kubenswrapper[33013]: I0313 11:14:15.782902 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-scripts\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.784618 master-0 kubenswrapper[33013]: I0313 11:14:15.784481 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-config-data-custom\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.787490 master-0 kubenswrapper[33013]: I0313 11:14:15.786906 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3dc21b-6425-45c1-a7f8-56336972b4ca-config-data\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:15.793365 master-0 kubenswrapper[33013]: I0313 11:14:15.793312 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkfc\" (UniqueName: \"kubernetes.io/projected/3c3dc21b-6425-45c1-a7f8-56336972b4ca-kube-api-access-zwkfc\") pod \"cinder-ceac4-backup-0\" (UID: \"3c3dc21b-6425-45c1-a7f8-56336972b4ca\") " pod="openstack/cinder-ceac4-backup-0" Mar 13 
11:14:16.123184 master-0 kubenswrapper[33013]: I0313 11:14:16.122895 33013 generic.go:334] "Generic (PLEG): container finished" podID="6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c" containerID="4fec2c1084ad455c355ffdae16e7f0ab51ba00a64686447c9612002002dacaf8" exitCode=0 Mar 13 11:14:16.123184 master-0 kubenswrapper[33013]: I0313 11:14:16.123045 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5987cf94cc-zcxvf" event={"ID":"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c","Type":"ContainerDied","Data":"4fec2c1084ad455c355ffdae16e7f0ab51ba00a64686447c9612002002dacaf8"} Mar 13 11:14:16.306800 master-0 kubenswrapper[33013]: I0313 11:14:16.306750 33013 scope.go:117] "RemoveContainer" containerID="8e5661b2a39e4da32a8d1b9522f5dc22eb5bdd6b17f475d0ac62686eedb5dcd8" Mar 13 11:14:16.458682 master-0 kubenswrapper[33013]: I0313 11:14:16.456418 33013 scope.go:117] "RemoveContainer" containerID="92136a8ac6376648da2845880d840c334bd57ffb3374dec57d9521620552252f" Mar 13 11:14:16.462608 master-0 kubenswrapper[33013]: I0313 11:14:16.461601 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:16.534177 master-0 kubenswrapper[33013]: I0313 11:14:16.532973 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceac4-volume-lvm-iscsi-0"] Mar 13 11:14:16.566494 master-0 kubenswrapper[33013]: I0313 11:14:16.563710 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ceac4-volume-lvm-iscsi-0"] Mar 13 11:14:16.586085 master-0 kubenswrapper[33013]: I0313 11:14:16.578890 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceac4-volume-lvm-iscsi-0"] Mar 13 11:14:16.597677 master-0 kubenswrapper[33013]: I0313 11:14:16.596449 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.608691 master-0 kubenswrapper[33013]: I0313 11:14:16.608222 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-ceac4-volume-lvm-iscsi-config-data" Mar 13 11:14:16.669567 master-0 kubenswrapper[33013]: I0313 11:14:16.669496 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-volume-lvm-iscsi-0"] Mar 13 11:14:16.722353 master-0 kubenswrapper[33013]: I0313 11:14:16.722100 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-etc-machine-id\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.722783 master-0 kubenswrapper[33013]: I0313 11:14:16.722694 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-config-data-custom\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.723067 master-0 kubenswrapper[33013]: I0313 11:14:16.723007 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-dev\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.723225 master-0 kubenswrapper[33013]: I0313 11:14:16.723176 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-combined-ca-bundle\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.723390 master-0 kubenswrapper[33013]: I0313 11:14:16.723322 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-var-locks-brick\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.723732 master-0 kubenswrapper[33013]: I0313 11:14:16.723528 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-scripts\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.724126 master-0 kubenswrapper[33013]: I0313 11:14:16.723984 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-config-data\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.724355 master-0 kubenswrapper[33013]: I0313 11:14:16.724239 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9jl6\" (UniqueName: \"kubernetes.io/projected/063dc6df-f168-42ac-835b-1002d40f6d55-kube-api-access-j9jl6\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.724522 master-0 kubenswrapper[33013]: I0313 11:14:16.724481 
33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-run\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.724817 master-0 kubenswrapper[33013]: I0313 11:14:16.724662 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-sys\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.725045 master-0 kubenswrapper[33013]: I0313 11:14:16.724981 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-etc-iscsi\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.725299 master-0 kubenswrapper[33013]: I0313 11:14:16.725234 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-var-locks-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.725466 master-0 kubenswrapper[33013]: I0313 11:14:16.725419 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-var-lib-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " 
pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.729510 master-0 kubenswrapper[33013]: I0313 11:14:16.725576 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-lib-modules\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.732142 master-0 kubenswrapper[33013]: I0313 11:14:16.731840 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-etc-nvme\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.760867 master-0 kubenswrapper[33013]: I0313 11:14:16.760717 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d9d572-4830-4c6f-aa61-e836816ec94b" path="/var/lib/kubelet/pods/b4d9d572-4830-4c6f-aa61-e836816ec94b/volumes" Mar 13 11:14:16.776720 master-0 kubenswrapper[33013]: I0313 11:14:16.776655 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8" path="/var/lib/kubelet/pods/f56aaf16-8187-4f52-ad5c-2ddc6c6d2ad8/volumes" Mar 13 11:14:16.845164 master-0 kubenswrapper[33013]: I0313 11:14:16.845114 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-scripts\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.845484 master-0 kubenswrapper[33013]: I0313 11:14:16.845468 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-config-data\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.845659 master-0 kubenswrapper[33013]: I0313 11:14:16.845569 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9jl6\" (UniqueName: \"kubernetes.io/projected/063dc6df-f168-42ac-835b-1002d40f6d55-kube-api-access-j9jl6\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.845752 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-run\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.856808 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-sys\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.856851 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-etc-iscsi\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.856950 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" 
(UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-var-locks-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.856990 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-var-lib-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857012 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-lib-modules\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857095 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-etc-nvme\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857402 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-etc-machine-id\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857519 33013 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-config-data-custom\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857573 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-dev\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857635 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-combined-ca-bundle\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857673 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-var-locks-brick\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857868 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-var-locks-brick\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.848828 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-run\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857940 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-sys\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.857970 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-etc-iscsi\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.858012 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-var-locks-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.858068 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-var-lib-cinder\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.858101 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-lib-modules\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.858146 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-etc-nvme\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.858861 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-etc-machine-id\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.861632 master-0 kubenswrapper[33013]: I0313 11:14:16.859510 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/063dc6df-f168-42ac-835b-1002d40f6d55-dev\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.870316 master-0 kubenswrapper[33013]: I0313 11:14:16.870237 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-scripts\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.872280 master-0 kubenswrapper[33013]: I0313 11:14:16.872240 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-config-data-custom\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.874912 master-0 kubenswrapper[33013]: I0313 11:14:16.872884 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-config-data\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.874912 master-0 kubenswrapper[33013]: I0313 11:14:16.873379 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063dc6df-f168-42ac-835b-1002d40f6d55-combined-ca-bundle\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.894810 master-0 kubenswrapper[33013]: I0313 11:14:16.894766 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9jl6\" (UniqueName: \"kubernetes.io/projected/063dc6df-f168-42ac-835b-1002d40f6d55-kube-api-access-j9jl6\") pod \"cinder-ceac4-volume-lvm-iscsi-0\" (UID: \"063dc6df-f168-42ac-835b-1002d40f6d55\") " pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:16.957848 master-0 kubenswrapper[33013]: I0313 11:14:16.957303 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:14:17.135007 master-0 kubenswrapper[33013]: I0313 11:14:17.134851 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:14:17.161206 master-0 kubenswrapper[33013]: I0313 11:14:17.160770 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7657b6885c-5c572" 
event={"ID":"0341ff60-4819-448f-98f7-4ee8216d5d39","Type":"ContainerStarted","Data":"67449c2217974697072bb81c294df0ce3c706dedfcc619e89cdbce71bd4a721f"} Mar 13 11:14:17.191201 master-0 kubenswrapper[33013]: I0313 11:14:17.190093 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5987cf94cc-zcxvf" event={"ID":"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c","Type":"ContainerStarted","Data":"afd7d4edcbf9c0eda5cca6734d9e94c3690436d22e0f2fa9009a6ae2f31f9275"} Mar 13 11:14:17.365560 master-0 kubenswrapper[33013]: W0313 11:14:17.359503 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c3dc21b_6425_45c1_a7f8_56336972b4ca.slice/crio-3c925e0104f208d86b2be4f28dda9bfe319c5adc3b69ee41be2759c6aa9602f1 WatchSource:0}: Error finding container 3c925e0104f208d86b2be4f28dda9bfe319c5adc3b69ee41be2759c6aa9602f1: Status 404 returned error can't find the container with id 3c925e0104f208d86b2be4f28dda9bfe319c5adc3b69ee41be2759c6aa9602f1 Mar 13 11:14:17.381365 master-0 kubenswrapper[33013]: I0313 11:14:17.377394 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-backup-0"] Mar 13 11:14:17.381365 master-0 kubenswrapper[33013]: I0313 11:14:17.381313 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-56f94cc46b-p5gzb" Mar 13 11:14:17.704308 master-0 kubenswrapper[33013]: I0313 11:14:17.704261 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" Mar 13 11:14:17.728808 master-0 kubenswrapper[33013]: I0313 11:14:17.725958 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:18.005240 master-0 kubenswrapper[33013]: I0313 11:14:18.002532 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59958fcccb-m5c9g" Mar 13 11:14:18.086759 master-0 kubenswrapper[33013]: I0313 11:14:18.086503 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" Mar 13 11:14:18.180614 master-0 kubenswrapper[33013]: I0313 11:14:18.174855 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-686c8b6b46-vlmv5"] Mar 13 11:14:18.180614 master-0 kubenswrapper[33013]: I0313 11:14:18.175012 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 11:14:18.180614 master-0 kubenswrapper[33013]: I0313 11:14:18.175150 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-686c8b6b46-vlmv5" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-log" containerID="cri-o://fcc5a610adc6d0d7a7fca8489f5839daa731e26ad101cd880b65e8a404682502" gracePeriod=30 Mar 13 11:14:18.180614 master-0 kubenswrapper[33013]: I0313 11:14:18.175716 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-686c8b6b46-vlmv5" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-api" containerID="cri-o://881f2d9d76ce2aa5c00a7f7ec95f29b43793e7ac713363e8fbfb81ad7dfd2f40" gracePeriod=30 Mar 13 11:14:18.215550 master-0 kubenswrapper[33013]: I0313 11:14:18.215133 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-686c8b6b46-vlmv5" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-log" probeResult="failure" output="Get \"https://10.128.0.219:8778/\": EOF" Mar 13 11:14:18.231641 master-0 kubenswrapper[33013]: I0313 11:14:18.231396 33013 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/placement-686c8b6b46-vlmv5" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-log" probeResult="failure" output="Get \"https://10.128.0.219:8778/\": EOF" Mar 13 11:14:18.351990 master-0 kubenswrapper[33013]: I0313 11:14:18.351903 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"] Mar 13 11:14:18.352410 master-0 kubenswrapper[33013]: I0313 11:14:18.352370 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7" podUID="e5137299-9cd3-46e0-9689-4416b06029db" containerName="dnsmasq-dns" containerID="cri-o://a5e2bfc5e6a076e2ec2afcf0d059532ea53005041d4e5c7f2d32740bd0be3c66" gracePeriod=10 Mar 13 11:14:18.400691 master-0 kubenswrapper[33013]: I0313 11:14:18.399033 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-backup-0" event={"ID":"3c3dc21b-6425-45c1-a7f8-56336972b4ca","Type":"ContainerStarted","Data":"3ca59d98321a0dca30a81e5469de586b706b73695c15fde21c0066fe3d086236"} Mar 13 11:14:18.400691 master-0 kubenswrapper[33013]: I0313 11:14:18.399114 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-backup-0" event={"ID":"3c3dc21b-6425-45c1-a7f8-56336972b4ca","Type":"ContainerStarted","Data":"3c925e0104f208d86b2be4f28dda9bfe319c5adc3b69ee41be2759c6aa9602f1"} Mar 13 11:14:18.508713 master-0 kubenswrapper[33013]: I0313 11:14:18.508654 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7657b6885c-5c572" event={"ID":"0341ff60-4819-448f-98f7-4ee8216d5d39","Type":"ContainerStarted","Data":"f455deb8a2fd5ae2c7a38c794095f7506f9328ddba80c2a522a2d9b4940c1bf1"} Mar 13 11:14:18.509000 master-0 kubenswrapper[33013]: I0313 11:14:18.508974 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:18.513398 master-0 kubenswrapper[33013]: I0313 11:14:18.513346 33013 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 13 11:14:18.522925 master-0 kubenswrapper[33013]: I0313 11:14:18.522880 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 11:14:18.535382 master-0 kubenswrapper[33013]: I0313 11:14:18.535306 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 13 11:14:18.537766 master-0 kubenswrapper[33013]: I0313 11:14:18.535346 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 13 11:14:18.575134 master-0 kubenswrapper[33013]: I0313 11:14:18.575087 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerStarted","Data":"4716c1978bba1d73df9a4cb2e6c28cdd15ac64a07b384a240650da50f863d580"} Mar 13 11:14:18.632192 master-0 kubenswrapper[33013]: E0313 11:14:18.632101 33013 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3630054b_4002_4cdc_b667_ad4cece7b207.slice/crio-fcc5a610adc6d0d7a7fca8489f5839daa731e26ad101cd880b65e8a404682502.scope\": RecentStats: unable to find data in memory cache]" Mar 13 11:14:18.658661 master-0 kubenswrapper[33013]: I0313 11:14:18.658619 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-scheduler-0" event={"ID":"dbfdad2d-8dce-47a6-9981-b7d7b984db88","Type":"ContainerStarted","Data":"1374dd69acd3232b41bdc9bbdc15cc1a72b92de8da1033305d907c973663c112"} Mar 13 11:14:18.665773 master-0 kubenswrapper[33013]: I0313 11:14:18.665680 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/62dc5405-5c84-43a7-9b0d-400716bf7ab4-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.666233 master-0 kubenswrapper[33013]: I0313 11:14:18.666213 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/62dc5405-5c84-43a7-9b0d-400716bf7ab4-openstack-config\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.683316 master-0 kubenswrapper[33013]: I0313 11:14:18.666624 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dc5405-5c84-43a7-9b0d-400716bf7ab4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.684505 master-0 kubenswrapper[33013]: I0313 11:14:18.667636 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 11:14:18.684733 master-0 kubenswrapper[33013]: I0313 11:14:18.680489 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-7657b6885c-5c572" podStartSLOduration=7.762115072 podStartE2EDuration="11.680461972s" podCreationTimestamp="2026-03-13 11:14:07 +0000 UTC" firstStartedPulling="2026-03-13 11:14:09.658352524 +0000 UTC m=+1033.134305873" lastFinishedPulling="2026-03-13 11:14:13.576699424 +0000 UTC m=+1037.052652773" observedRunningTime="2026-03-13 11:14:18.626524399 +0000 UTC m=+1042.102477748" watchObservedRunningTime="2026-03-13 11:14:18.680461972 +0000 UTC m=+1042.156415321" Mar 13 11:14:18.693041 master-0 kubenswrapper[33013]: I0313 11:14:18.690913 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h84kq\" (UniqueName: 
\"kubernetes.io/projected/62dc5405-5c84-43a7-9b0d-400716bf7ab4-kube-api-access-h84kq\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.805144 master-0 kubenswrapper[33013]: I0313 11:14:18.805076 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/62dc5405-5c84-43a7-9b0d-400716bf7ab4-openstack-config\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.805354 master-0 kubenswrapper[33013]: I0313 11:14:18.805328 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dc5405-5c84-43a7-9b0d-400716bf7ab4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.805442 master-0 kubenswrapper[33013]: I0313 11:14:18.805400 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h84kq\" (UniqueName: \"kubernetes.io/projected/62dc5405-5c84-43a7-9b0d-400716bf7ab4-kube-api-access-h84kq\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.805442 master-0 kubenswrapper[33013]: I0313 11:14:18.805428 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/62dc5405-5c84-43a7-9b0d-400716bf7ab4-openstack-config-secret\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.816655 master-0 kubenswrapper[33013]: I0313 11:14:18.816599 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/62dc5405-5c84-43a7-9b0d-400716bf7ab4-openstack-config-secret\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.827264 master-0 kubenswrapper[33013]: I0313 11:14:18.827220 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/62dc5405-5c84-43a7-9b0d-400716bf7ab4-openstack-config\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.874695 master-0 kubenswrapper[33013]: I0313 11:14:18.874620 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h84kq\" (UniqueName: \"kubernetes.io/projected/62dc5405-5c84-43a7-9b0d-400716bf7ab4-kube-api-access-h84kq\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:18.897097 master-0 kubenswrapper[33013]: I0313 11:14:18.896720 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dc5405-5c84-43a7-9b0d-400716bf7ab4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"62dc5405-5c84-43a7-9b0d-400716bf7ab4\") " pod="openstack/openstackclient" Mar 13 11:14:19.065220 master-0 kubenswrapper[33013]: I0313 11:14:19.065134 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceac4-volume-lvm-iscsi-0"] Mar 13 11:14:19.167979 master-0 kubenswrapper[33013]: I0313 11:14:19.167914 33013 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-ceac4-api-0" podUID="6d97eda7-c4c6-42f3-bb49-12824d113ea9" containerName="cinder-api" probeResult="failure" output="Get \"https://10.128.0.229:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:14:19.196646 master-0 kubenswrapper[33013]: I0313 11:14:19.196600 33013 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 13 11:14:19.700794 master-0 kubenswrapper[33013]: I0313 11:14:19.695135 33013 generic.go:334] "Generic (PLEG): container finished" podID="3630054b-4002-4cdc-b667-ad4cece7b207" containerID="fcc5a610adc6d0d7a7fca8489f5839daa731e26ad101cd880b65e8a404682502" exitCode=143 Mar 13 11:14:19.700794 master-0 kubenswrapper[33013]: I0313 11:14:19.695220 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-686c8b6b46-vlmv5" event={"ID":"3630054b-4002-4cdc-b667-ad4cece7b207","Type":"ContainerDied","Data":"fcc5a610adc6d0d7a7fca8489f5839daa731e26ad101cd880b65e8a404682502"} Mar 13 11:14:19.701651 master-0 kubenswrapper[33013]: I0313 11:14:19.701574 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5987cf94cc-zcxvf" event={"ID":"6bd500aa-4d7d-4372-a7b7-7ef74c7bca9c","Type":"ContainerStarted","Data":"80b358abed66ebc723d5b8af4ba1114fc8d6ebc25fefee9f70d031e8a03c6202"} Mar 13 11:14:19.701732 master-0 kubenswrapper[33013]: I0313 11:14:19.701704 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-5987cf94cc-zcxvf" Mar 13 11:14:19.708606 master-0 kubenswrapper[33013]: I0313 11:14:19.708538 33013 generic.go:334] "Generic (PLEG): container finished" podID="e5137299-9cd3-46e0-9689-4416b06029db" containerID="a5e2bfc5e6a076e2ec2afcf0d059532ea53005041d4e5c7f2d32740bd0be3c66" exitCode=0 Mar 13 11:14:19.708759 master-0 kubenswrapper[33013]: I0313 11:14:19.708667 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7" event={"ID":"e5137299-9cd3-46e0-9689-4416b06029db","Type":"ContainerDied","Data":"a5e2bfc5e6a076e2ec2afcf0d059532ea53005041d4e5c7f2d32740bd0be3c66"} Mar 13 11:14:19.708803 master-0 kubenswrapper[33013]: I0313 11:14:19.708753 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7" 
event={"ID":"e5137299-9cd3-46e0-9689-4416b06029db","Type":"ContainerDied","Data":"16c9a98d827fcc239efb3dd3a2c7cefdeb444a0b955c68042909628fb499a2d4"} Mar 13 11:14:19.708803 master-0 kubenswrapper[33013]: I0313 11:14:19.708775 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16c9a98d827fcc239efb3dd3a2c7cefdeb444a0b955c68042909628fb499a2d4" Mar 13 11:14:19.717432 master-0 kubenswrapper[33013]: I0313 11:14:19.717369 33013 generic.go:334] "Generic (PLEG): container finished" podID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerID="f455deb8a2fd5ae2c7a38c794095f7506f9328ddba80c2a522a2d9b4940c1bf1" exitCode=1 Mar 13 11:14:19.719084 master-0 kubenswrapper[33013]: I0313 11:14:19.717561 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7657b6885c-5c572" event={"ID":"0341ff60-4819-448f-98f7-4ee8216d5d39","Type":"ContainerDied","Data":"f455deb8a2fd5ae2c7a38c794095f7506f9328ddba80c2a522a2d9b4940c1bf1"} Mar 13 11:14:19.720162 master-0 kubenswrapper[33013]: I0313 11:14:19.719638 33013 scope.go:117] "RemoveContainer" containerID="f455deb8a2fd5ae2c7a38c794095f7506f9328ddba80c2a522a2d9b4940c1bf1" Mar 13 11:14:19.735222 master-0 kubenswrapper[33013]: I0313 11:14:19.729008 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" event={"ID":"063dc6df-f168-42ac-835b-1002d40f6d55","Type":"ContainerStarted","Data":"eb193bef47599e99ac08a3893a977aabfb3c9956d07b9e9a5f762fb318a0203e"} Mar 13 11:14:19.896603 master-0 kubenswrapper[33013]: I0313 11:14:19.894047 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 13 11:14:19.936684 master-0 kubenswrapper[33013]: I0313 11:14:19.935565 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-5987cf94cc-zcxvf" podStartSLOduration=8.935277568 podStartE2EDuration="8.935277568s" podCreationTimestamp="2026-03-13 11:14:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:19.866195421 +0000 UTC m=+1043.342148770" watchObservedRunningTime="2026-03-13 11:14:19.935277568 +0000 UTC m=+1043.411230917" Mar 13 11:14:20.065754 master-0 kubenswrapper[33013]: I0313 11:14:20.065692 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7" Mar 13 11:14:20.169483 master-0 kubenswrapper[33013]: I0313 11:14:20.169406 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-nb\") pod \"e5137299-9cd3-46e0-9689-4416b06029db\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " Mar 13 11:14:20.169730 master-0 kubenswrapper[33013]: I0313 11:14:20.169505 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-swift-storage-0\") pod \"e5137299-9cd3-46e0-9689-4416b06029db\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " Mar 13 11:14:20.169730 master-0 kubenswrapper[33013]: I0313 11:14:20.169568 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-config\") pod \"e5137299-9cd3-46e0-9689-4416b06029db\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " Mar 13 11:14:20.169730 master-0 kubenswrapper[33013]: I0313 11:14:20.169724 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-sb\") pod \"e5137299-9cd3-46e0-9689-4416b06029db\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " Mar 13 11:14:20.169965 master-0 kubenswrapper[33013]: I0313 
11:14:20.169940 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-svc\") pod \"e5137299-9cd3-46e0-9689-4416b06029db\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " Mar 13 11:14:20.170028 master-0 kubenswrapper[33013]: I0313 11:14:20.169997 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdz6r\" (UniqueName: \"kubernetes.io/projected/e5137299-9cd3-46e0-9689-4416b06029db-kube-api-access-fdz6r\") pod \"e5137299-9cd3-46e0-9689-4416b06029db\" (UID: \"e5137299-9cd3-46e0-9689-4416b06029db\") " Mar 13 11:14:20.200380 master-0 kubenswrapper[33013]: I0313 11:14:20.200283 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-ceac4-api-0" podUID="6d97eda7-c4c6-42f3-bb49-12824d113ea9" containerName="cinder-api" probeResult="failure" output="Get \"https://10.128.0.229:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:14:20.209287 master-0 kubenswrapper[33013]: I0313 11:14:20.209183 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5137299-9cd3-46e0-9689-4416b06029db-kube-api-access-fdz6r" (OuterVolumeSpecName: "kube-api-access-fdz6r") pod "e5137299-9cd3-46e0-9689-4416b06029db" (UID: "e5137299-9cd3-46e0-9689-4416b06029db"). InnerVolumeSpecName "kube-api-access-fdz6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:20.272930 master-0 kubenswrapper[33013]: I0313 11:14:20.272868 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdz6r\" (UniqueName: \"kubernetes.io/projected/e5137299-9cd3-46e0-9689-4416b06029db-kube-api-access-fdz6r\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:20.421752 master-0 kubenswrapper[33013]: I0313 11:14:20.415370 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5137299-9cd3-46e0-9689-4416b06029db" (UID: "e5137299-9cd3-46e0-9689-4416b06029db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:20.458618 master-0 kubenswrapper[33013]: I0313 11:14:20.458335 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5137299-9cd3-46e0-9689-4416b06029db" (UID: "e5137299-9cd3-46e0-9689-4416b06029db"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:20.481682 master-0 kubenswrapper[33013]: I0313 11:14:20.481315 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:20.481682 master-0 kubenswrapper[33013]: I0313 11:14:20.481378 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:20.581448 master-0 kubenswrapper[33013]: I0313 11:14:20.581248 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e5137299-9cd3-46e0-9689-4416b06029db" (UID: "e5137299-9cd3-46e0-9689-4416b06029db"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:20.585244 master-0 kubenswrapper[33013]: I0313 11:14:20.585189 33013 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:20.634749 master-0 kubenswrapper[33013]: I0313 11:14:20.631213 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5137299-9cd3-46e0-9689-4416b06029db" (UID: "e5137299-9cd3-46e0-9689-4416b06029db"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:20.691488 master-0 kubenswrapper[33013]: I0313 11:14:20.689491 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:20.713070 master-0 kubenswrapper[33013]: I0313 11:14:20.710905 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-config" (OuterVolumeSpecName: "config") pod "e5137299-9cd3-46e0-9689-4416b06029db" (UID: "e5137299-9cd3-46e0-9689-4416b06029db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:14:20.792086 master-0 kubenswrapper[33013]: I0313 11:14:20.792011 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" event={"ID":"063dc6df-f168-42ac-835b-1002d40f6d55","Type":"ContainerStarted","Data":"613242b27fd32cda77d571851be61f6a87d4e4baf499019bffd509f51590b570"} Mar 13 11:14:20.794623 master-0 kubenswrapper[33013]: I0313 11:14:20.794554 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5137299-9cd3-46e0-9689-4416b06029db-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:20.810019 master-0 kubenswrapper[33013]: I0313 11:14:20.809962 33013 generic.go:334] "Generic (PLEG): container finished" podID="e16baf7d-8440-4431-a184-523ae34f6e6f" containerID="4716c1978bba1d73df9a4cb2e6c28cdd15ac64a07b384a240650da50f863d580" exitCode=0 Mar 13 11:14:20.810250 master-0 kubenswrapper[33013]: I0313 11:14:20.810051 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerDied","Data":"4716c1978bba1d73df9a4cb2e6c28cdd15ac64a07b384a240650da50f863d580"} Mar 13 11:14:20.830131 master-0 
kubenswrapper[33013]: I0313 11:14:20.828727 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-scheduler-0" event={"ID":"dbfdad2d-8dce-47a6-9981-b7d7b984db88","Type":"ContainerStarted","Data":"be2f1e4b9d9f1d39725be65582c628ba62ba75c6f3ead738e178c9eee5b4619a"} Mar 13 11:14:20.840376 master-0 kubenswrapper[33013]: I0313 11:14:20.840237 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-backup-0" event={"ID":"3c3dc21b-6425-45c1-a7f8-56336972b4ca","Type":"ContainerStarted","Data":"db525c7b57a9dbe69ba1edd9ad9687b458f45670ed4876d5e476fd80d0caabac"} Mar 13 11:14:20.862904 master-0 kubenswrapper[33013]: I0313 11:14:20.862428 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7657b6885c-5c572" event={"ID":"0341ff60-4819-448f-98f7-4ee8216d5d39","Type":"ContainerStarted","Data":"e67f5bd85896c51d4994dea3411f9ed42ea5cd21ebd9ed42af3b5df24f197d91"} Mar 13 11:14:20.863732 master-0 kubenswrapper[33013]: I0313 11:14:20.863690 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:20.875171 master-0 kubenswrapper[33013]: I0313 11:14:20.875126 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc5fdb9b9-vmzb7" Mar 13 11:14:20.875523 master-0 kubenswrapper[33013]: I0313 11:14:20.875472 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"62dc5405-5c84-43a7-9b0d-400716bf7ab4","Type":"ContainerStarted","Data":"762590ee8ecd6bd98ad52efdf81aebcbe45e5dbea27cb101ec12e1b3bf0d2041"} Mar 13 11:14:20.930628 master-0 kubenswrapper[33013]: I0313 11:14:20.930506 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ceac4-scheduler-0" podStartSLOduration=8.930479385 podStartE2EDuration="8.930479385s" podCreationTimestamp="2026-03-13 11:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:20.905753897 +0000 UTC m=+1044.381707246" watchObservedRunningTime="2026-03-13 11:14:20.930479385 +0000 UTC m=+1044.406432734" Mar 13 11:14:21.073622 master-0 kubenswrapper[33013]: I0313 11:14:21.073528 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ceac4-backup-0" podStartSLOduration=6.073501187 podStartE2EDuration="6.073501187s" podCreationTimestamp="2026-03-13 11:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:20.972319842 +0000 UTC m=+1044.448273191" watchObservedRunningTime="2026-03-13 11:14:21.073501187 +0000 UTC m=+1044.549454536" Mar 13 11:14:21.156677 master-0 kubenswrapper[33013]: I0313 11:14:21.156323 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"] Mar 13 11:14:21.184271 master-0 kubenswrapper[33013]: I0313 11:14:21.184092 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc5fdb9b9-vmzb7"] Mar 13 11:14:21.464249 master-0 kubenswrapper[33013]: I0313 11:14:21.462702 33013 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ceac4-backup-0" Mar 13 11:14:21.516681 master-0 kubenswrapper[33013]: I0313 11:14:21.516093 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-599ddd56fb-m48bv" Mar 13 11:14:21.915923 master-0 kubenswrapper[33013]: I0313 11:14:21.915848 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" event={"ID":"063dc6df-f168-42ac-835b-1002d40f6d55","Type":"ContainerStarted","Data":"34e12b5eafad8d675a88da25a2b2d1a19151da0b8c8f7f3e858730c6a4097c85"} Mar 13 11:14:21.944292 master-0 kubenswrapper[33013]: I0313 11:14:21.944225 33013 generic.go:334] "Generic (PLEG): container finished" podID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerID="e67f5bd85896c51d4994dea3411f9ed42ea5cd21ebd9ed42af3b5df24f197d91" exitCode=1 Mar 13 11:14:21.946238 master-0 kubenswrapper[33013]: I0313 11:14:21.946202 33013 scope.go:117] "RemoveContainer" containerID="e67f5bd85896c51d4994dea3411f9ed42ea5cd21ebd9ed42af3b5df24f197d91" Mar 13 11:14:21.946677 master-0 kubenswrapper[33013]: E0313 11:14:21.946642 33013 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7657b6885c-5c572_openstack(0341ff60-4819-448f-98f7-4ee8216d5d39)\"" pod="openstack/ironic-7657b6885c-5c572" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" Mar 13 11:14:21.947100 master-0 kubenswrapper[33013]: I0313 11:14:21.946990 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7657b6885c-5c572" event={"ID":"0341ff60-4819-448f-98f7-4ee8216d5d39","Type":"ContainerDied","Data":"e67f5bd85896c51d4994dea3411f9ed42ea5cd21ebd9ed42af3b5df24f197d91"} Mar 13 11:14:21.947100 master-0 kubenswrapper[33013]: I0313 11:14:21.947067 33013 scope.go:117] "RemoveContainer" 
containerID="f455deb8a2fd5ae2c7a38c794095f7506f9328ddba80c2a522a2d9b4940c1bf1" Mar 13 11:14:21.979093 master-0 kubenswrapper[33013]: I0313 11:14:21.977940 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" podStartSLOduration=5.977912045 podStartE2EDuration="5.977912045s" podCreationTimestamp="2026-03-13 11:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:21.969772352 +0000 UTC m=+1045.445725701" watchObservedRunningTime="2026-03-13 11:14:21.977912045 +0000 UTC m=+1045.453865394" Mar 13 11:14:22.300609 master-0 kubenswrapper[33013]: I0313 11:14:22.299822 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-fxhxg"] Mar 13 11:14:22.300609 master-0 kubenswrapper[33013]: E0313 11:14:22.300486 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5137299-9cd3-46e0-9689-4416b06029db" containerName="init" Mar 13 11:14:22.300609 master-0 kubenswrapper[33013]: I0313 11:14:22.300524 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5137299-9cd3-46e0-9689-4416b06029db" containerName="init" Mar 13 11:14:22.300609 master-0 kubenswrapper[33013]: E0313 11:14:22.300607 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5137299-9cd3-46e0-9689-4416b06029db" containerName="dnsmasq-dns" Mar 13 11:14:22.300609 master-0 kubenswrapper[33013]: I0313 11:14:22.300618 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5137299-9cd3-46e0-9689-4416b06029db" containerName="dnsmasq-dns" Mar 13 11:14:22.301026 master-0 kubenswrapper[33013]: I0313 11:14:22.300858 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5137299-9cd3-46e0-9689-4416b06029db" containerName="dnsmasq-dns" Mar 13 11:14:22.308761 master-0 kubenswrapper[33013]: I0313 11:14:22.301747 33013 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.313960 master-0 kubenswrapper[33013]: I0313 11:14:22.313893 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 13 11:14:22.314254 master-0 kubenswrapper[33013]: I0313 11:14:22.314206 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 13 11:14:22.321138 master-0 kubenswrapper[33013]: I0313 11:14:22.321061 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-fxhxg"] Mar 13 11:14:22.480703 master-0 kubenswrapper[33013]: I0313 11:14:22.477237 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-combined-ca-bundle\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.480703 master-0 kubenswrapper[33013]: I0313 11:14:22.477322 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.480703 master-0 kubenswrapper[33013]: I0313 11:14:22.477386 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-scripts\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.480703 master-0 kubenswrapper[33013]: 
I0313 11:14:22.477408 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-config\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.480703 master-0 kubenswrapper[33013]: I0313 11:14:22.477441 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.480703 master-0 kubenswrapper[33013]: I0313 11:14:22.477495 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxp9q\" (UniqueName: \"kubernetes.io/projected/68f912cc-d199-4a01-bec5-765cc17824bb-kube-api-access-lxp9q\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.480703 master-0 kubenswrapper[33013]: I0313 11:14:22.477565 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/68f912cc-d199-4a01-bec5-765cc17824bb-etc-podinfo\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.581492 master-0 kubenswrapper[33013]: I0313 11:14:22.581411 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-scripts\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " 
pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.581798 master-0 kubenswrapper[33013]: I0313 11:14:22.581506 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-config\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.581798 master-0 kubenswrapper[33013]: I0313 11:14:22.581572 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.581798 master-0 kubenswrapper[33013]: I0313 11:14:22.581699 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxp9q\" (UniqueName: \"kubernetes.io/projected/68f912cc-d199-4a01-bec5-765cc17824bb-kube-api-access-lxp9q\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.581925 master-0 kubenswrapper[33013]: I0313 11:14:22.581813 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/68f912cc-d199-4a01-bec5-765cc17824bb-etc-podinfo\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.581925 master-0 kubenswrapper[33013]: I0313 11:14:22.581890 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-combined-ca-bundle\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: 
\"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.581989 master-0 kubenswrapper[33013]: I0313 11:14:22.581943 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.582571 master-0 kubenswrapper[33013]: I0313 11:14:22.582529 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.586170 master-0 kubenswrapper[33013]: I0313 11:14:22.586118 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.599613 master-0 kubenswrapper[33013]: I0313 11:14:22.597638 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-scripts\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.601274 master-0 kubenswrapper[33013]: I0313 11:14:22.601196 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/68f912cc-d199-4a01-bec5-765cc17824bb-etc-podinfo\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.612683 master-0 kubenswrapper[33013]: I0313 11:14:22.609201 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-combined-ca-bundle\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.615579 master-0 kubenswrapper[33013]: I0313 11:14:22.613279 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxp9q\" (UniqueName: \"kubernetes.io/projected/68f912cc-d199-4a01-bec5-765cc17824bb-kube-api-access-lxp9q\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.629616 master-0 kubenswrapper[33013]: I0313 11:14:22.628474 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-config\") pod \"ironic-inspector-db-sync-fxhxg\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") " pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.721623 master-0 kubenswrapper[33013]: I0313 11:14:22.721565 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-fxhxg" Mar 13 11:14:22.750196 master-0 kubenswrapper[33013]: I0313 11:14:22.745226 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5137299-9cd3-46e0-9689-4416b06029db" path="/var/lib/kubelet/pods/e5137299-9cd3-46e0-9689-4416b06029db/volumes" Mar 13 11:14:22.750196 master-0 kubenswrapper[33013]: I0313 11:14:22.745983 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0" Mar 13 11:14:22.919344 master-0 kubenswrapper[33013]: I0313 11:14:22.917819 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:22.968666 master-0 kubenswrapper[33013]: I0313 11:14:22.968615 33013 generic.go:334] "Generic (PLEG): container finished" podID="3630054b-4002-4cdc-b667-ad4cece7b207" containerID="881f2d9d76ce2aa5c00a7f7ec95f29b43793e7ac713363e8fbfb81ad7dfd2f40" exitCode=0 Mar 13 11:14:22.968865 master-0 kubenswrapper[33013]: I0313 11:14:22.968685 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-686c8b6b46-vlmv5" event={"ID":"3630054b-4002-4cdc-b667-ad4cece7b207","Type":"ContainerDied","Data":"881f2d9d76ce2aa5c00a7f7ec95f29b43793e7ac713363e8fbfb81ad7dfd2f40"} Mar 13 11:14:22.978924 master-0 kubenswrapper[33013]: I0313 11:14:22.976512 33013 scope.go:117] "RemoveContainer" containerID="e67f5bd85896c51d4994dea3411f9ed42ea5cd21ebd9ed42af3b5df24f197d91" Mar 13 11:14:22.978924 master-0 kubenswrapper[33013]: E0313 11:14:22.976799 33013 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7657b6885c-5c572_openstack(0341ff60-4819-448f-98f7-4ee8216d5d39)\"" pod="openstack/ironic-7657b6885c-5c572" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" Mar 13 11:14:23.084329 master-0 kubenswrapper[33013]: 
I0313 11:14:23.084191 33013 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:23.380691 master-0 kubenswrapper[33013]: I0313 11:14:23.380646 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:14:23.498106 master-0 kubenswrapper[33013]: W0313 11:14:23.495727 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68f912cc_d199_4a01_bec5_765cc17824bb.slice/crio-346e758bd2e97e34002f75524fb36af7a92c624f29271fb277647af25327e53b WatchSource:0}: Error finding container 346e758bd2e97e34002f75524fb36af7a92c624f29271fb277647af25327e53b: Status 404 returned error can't find the container with id 346e758bd2e97e34002f75524fb36af7a92c624f29271fb277647af25327e53b Mar 13 11:14:23.513352 master-0 kubenswrapper[33013]: I0313 11:14:23.513311 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ceac4-scheduler-0" Mar 13 11:14:23.526254 master-0 kubenswrapper[33013]: I0313 11:14:23.525762 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3630054b-4002-4cdc-b667-ad4cece7b207-logs\") pod \"3630054b-4002-4cdc-b667-ad4cece7b207\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " Mar 13 11:14:23.526254 master-0 kubenswrapper[33013]: I0313 11:14:23.525824 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-public-tls-certs\") pod \"3630054b-4002-4cdc-b667-ad4cece7b207\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " Mar 13 11:14:23.526254 master-0 kubenswrapper[33013]: I0313 11:14:23.525984 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr6h2\" 
(UniqueName: \"kubernetes.io/projected/3630054b-4002-4cdc-b667-ad4cece7b207-kube-api-access-wr6h2\") pod \"3630054b-4002-4cdc-b667-ad4cece7b207\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " Mar 13 11:14:23.526254 master-0 kubenswrapper[33013]: I0313 11:14:23.526103 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-internal-tls-certs\") pod \"3630054b-4002-4cdc-b667-ad4cece7b207\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " Mar 13 11:14:23.526891 master-0 kubenswrapper[33013]: I0313 11:14:23.526289 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-config-data\") pod \"3630054b-4002-4cdc-b667-ad4cece7b207\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " Mar 13 11:14:23.526891 master-0 kubenswrapper[33013]: I0313 11:14:23.526374 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-scripts\") pod \"3630054b-4002-4cdc-b667-ad4cece7b207\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " Mar 13 11:14:23.529748 master-0 kubenswrapper[33013]: I0313 11:14:23.529705 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-combined-ca-bundle\") pod \"3630054b-4002-4cdc-b667-ad4cece7b207\" (UID: \"3630054b-4002-4cdc-b667-ad4cece7b207\") " Mar 13 11:14:23.530601 master-0 kubenswrapper[33013]: I0313 11:14:23.530503 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3630054b-4002-4cdc-b667-ad4cece7b207-logs" (OuterVolumeSpecName: "logs") pod "3630054b-4002-4cdc-b667-ad4cece7b207" (UID: "3630054b-4002-4cdc-b667-ad4cece7b207"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:14:23.531125 master-0 kubenswrapper[33013]: I0313 11:14:23.531096 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3630054b-4002-4cdc-b667-ad4cece7b207-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:23.538799 master-0 kubenswrapper[33013]: I0313 11:14:23.538570 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-scripts" (OuterVolumeSpecName: "scripts") pod "3630054b-4002-4cdc-b667-ad4cece7b207" (UID: "3630054b-4002-4cdc-b667-ad4cece7b207"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:23.546937 master-0 kubenswrapper[33013]: I0313 11:14:23.546726 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3630054b-4002-4cdc-b667-ad4cece7b207-kube-api-access-wr6h2" (OuterVolumeSpecName: "kube-api-access-wr6h2") pod "3630054b-4002-4cdc-b667-ad4cece7b207" (UID: "3630054b-4002-4cdc-b667-ad4cece7b207"). InnerVolumeSpecName "kube-api-access-wr6h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:23.558128 master-0 kubenswrapper[33013]: I0313 11:14:23.558048 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-fxhxg"] Mar 13 11:14:23.595093 master-0 kubenswrapper[33013]: I0313 11:14:23.593531 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-config-data" (OuterVolumeSpecName: "config-data") pod "3630054b-4002-4cdc-b667-ad4cece7b207" (UID: "3630054b-4002-4cdc-b667-ad4cece7b207"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:23.634029 master-0 kubenswrapper[33013]: I0313 11:14:23.629114 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3630054b-4002-4cdc-b667-ad4cece7b207" (UID: "3630054b-4002-4cdc-b667-ad4cece7b207"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:23.638614 master-0 kubenswrapper[33013]: I0313 11:14:23.636706 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-ceac4-api-0" Mar 13 11:14:23.681642 master-0 kubenswrapper[33013]: I0313 11:14:23.678034 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:23.681642 master-0 kubenswrapper[33013]: I0313 11:14:23.678095 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:23.723648 master-0 kubenswrapper[33013]: I0313 11:14:23.721708 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3630054b-4002-4cdc-b667-ad4cece7b207" (UID: "3630054b-4002-4cdc-b667-ad4cece7b207"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:23.772922 master-0 kubenswrapper[33013]: I0313 11:14:23.766312 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:23.772922 master-0 kubenswrapper[33013]: I0313 11:14:23.766363 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr6h2\" (UniqueName: \"kubernetes.io/projected/3630054b-4002-4cdc-b667-ad4cece7b207-kube-api-access-wr6h2\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:23.808009 master-0 kubenswrapper[33013]: I0313 11:14:23.807932 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3630054b-4002-4cdc-b667-ad4cece7b207" (UID: "3630054b-4002-4cdc-b667-ad4cece7b207"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:23.880762 master-0 kubenswrapper[33013]: I0313 11:14:23.876299 33013 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:23.880762 master-0 kubenswrapper[33013]: I0313 11:14:23.876370 33013 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3630054b-4002-4cdc-b667-ad4cece7b207-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:23.919737 master-0 kubenswrapper[33013]: I0313 11:14:23.917120 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c5777cc85-7p8mx" Mar 13 11:14:24.068566 master-0 kubenswrapper[33013]: I0313 11:14:24.062680 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-599ddd56fb-m48bv"] Mar 13 11:14:24.068566 master-0 kubenswrapper[33013]: I0313 11:14:24.063037 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-599ddd56fb-m48bv" podUID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerName="neutron-api" containerID="cri-o://1493f263662fa53e88cc5d65bedcd38bf9d16f06a98ec33e3cf9ff571b46d841" gracePeriod=30 Mar 13 11:14:24.068566 master-0 kubenswrapper[33013]: I0313 11:14:24.065876 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-599ddd56fb-m48bv" podUID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerName="neutron-httpd" containerID="cri-o://74c6758cb83b178f150a2428138bc6dd87b3665d5f497b6843db0a0dcb9d6546" gracePeriod=30 Mar 13 11:14:24.104685 master-0 kubenswrapper[33013]: I0313 11:14:24.100554 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-686c8b6b46-vlmv5" 
event={"ID":"3630054b-4002-4cdc-b667-ad4cece7b207","Type":"ContainerDied","Data":"658e312ea5140366337197ba2dc803f72f06c2836e27def07efa2bc9837d0ccf"} Mar 13 11:14:24.104685 master-0 kubenswrapper[33013]: I0313 11:14:24.100651 33013 scope.go:117] "RemoveContainer" containerID="881f2d9d76ce2aa5c00a7f7ec95f29b43793e7ac713363e8fbfb81ad7dfd2f40" Mar 13 11:14:24.104685 master-0 kubenswrapper[33013]: I0313 11:14:24.100860 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-686c8b6b46-vlmv5" Mar 13 11:14:24.138352 master-0 kubenswrapper[33013]: I0313 11:14:24.138174 33013 generic.go:334] "Generic (PLEG): container finished" podID="3b99f02d-f8e2-497b-b68b-8e445e7b7541" containerID="d28239b3ecad677258f0a8ab2dcbfcbf225e5fc929b3f5b896bdf17cf2238051" exitCode=1 Mar 13 11:14:24.138352 master-0 kubenswrapper[33013]: I0313 11:14:24.138249 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" event={"ID":"3b99f02d-f8e2-497b-b68b-8e445e7b7541","Type":"ContainerDied","Data":"d28239b3ecad677258f0a8ab2dcbfcbf225e5fc929b3f5b896bdf17cf2238051"} Mar 13 11:14:24.140369 master-0 kubenswrapper[33013]: I0313 11:14:24.139357 33013 scope.go:117] "RemoveContainer" containerID="d28239b3ecad677258f0a8ab2dcbfcbf225e5fc929b3f5b896bdf17cf2238051" Mar 13 11:14:24.141672 master-0 kubenswrapper[33013]: I0313 11:14:24.140874 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-fxhxg" event={"ID":"68f912cc-d199-4a01-bec5-765cc17824bb","Type":"ContainerStarted","Data":"346e758bd2e97e34002f75524fb36af7a92c624f29271fb277647af25327e53b"} Mar 13 11:14:24.141672 master-0 kubenswrapper[33013]: I0313 11:14:24.141535 33013 scope.go:117] "RemoveContainer" containerID="e67f5bd85896c51d4994dea3411f9ed42ea5cd21ebd9ed42af3b5df24f197d91" Mar 13 11:14:24.144992 master-0 kubenswrapper[33013]: E0313 11:14:24.142014 33013 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7657b6885c-5c572_openstack(0341ff60-4819-448f-98f7-4ee8216d5d39)\"" pod="openstack/ironic-7657b6885c-5c572" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" Mar 13 11:14:24.295942 master-0 kubenswrapper[33013]: I0313 11:14:24.295867 33013 scope.go:117] "RemoveContainer" containerID="fcc5a610adc6d0d7a7fca8489f5839daa731e26ad101cd880b65e8a404682502" Mar 13 11:14:24.414134 master-0 kubenswrapper[33013]: I0313 11:14:24.414063 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-686c8b6b46-vlmv5"] Mar 13 11:14:24.418434 master-0 kubenswrapper[33013]: E0313 11:14:24.418282 33013 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3630054b_4002_4cdc_b667_ad4cece7b207.slice/crio-658e312ea5140366337197ba2dc803f72f06c2836e27def07efa2bc9837d0ccf\": RecentStats: unable to find data in memory cache]" Mar 13 11:14:24.468051 master-0 kubenswrapper[33013]: I0313 11:14:24.467855 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-686c8b6b46-vlmv5"] Mar 13 11:14:24.733867 master-0 kubenswrapper[33013]: I0313 11:14:24.733030 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" path="/var/lib/kubelet/pods/3630054b-4002-4cdc-b667-ad4cece7b207/volumes" Mar 13 11:14:25.165883 master-0 kubenswrapper[33013]: I0313 11:14:25.165829 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" event={"ID":"3b99f02d-f8e2-497b-b68b-8e445e7b7541","Type":"ContainerStarted","Data":"2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef"} Mar 13 11:14:25.167086 master-0 kubenswrapper[33013]: I0313 11:14:25.166225 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:25.173174 master-0 kubenswrapper[33013]: I0313 11:14:25.173137 33013 generic.go:334] "Generic (PLEG): container finished" podID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerID="74c6758cb83b178f150a2428138bc6dd87b3665d5f497b6843db0a0dcb9d6546" exitCode=0
Mar 13 11:14:25.173370 master-0 kubenswrapper[33013]: I0313 11:14:25.173352 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599ddd56fb-m48bv" event={"ID":"142b9e51-cb04-42ce-b6f5-b0554d9585a2","Type":"ContainerDied","Data":"74c6758cb83b178f150a2428138bc6dd87b3665d5f497b6843db0a0dcb9d6546"}
Mar 13 11:14:25.176996 master-0 kubenswrapper[33013]: I0313 11:14:25.176973 33013 scope.go:117] "RemoveContainer" containerID="e67f5bd85896c51d4994dea3411f9ed42ea5cd21ebd9ed42af3b5df24f197d91"
Mar 13 11:14:25.180620 master-0 kubenswrapper[33013]: E0313 11:14:25.177341 33013 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7657b6885c-5c572_openstack(0341ff60-4819-448f-98f7-4ee8216d5d39)\"" pod="openstack/ironic-7657b6885c-5c572" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39"
Mar 13 11:14:25.293203 master-0 kubenswrapper[33013]: I0313 11:14:25.292263 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-5987cf94cc-zcxvf"
Mar 13 11:14:25.485701 master-0 kubenswrapper[33013]: I0313 11:14:25.481829 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-7657b6885c-5c572"]
Mar 13 11:14:26.221000 master-0 kubenswrapper[33013]: I0313 11:14:26.220887 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-7657b6885c-5c572" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api-log" containerID="cri-o://67449c2217974697072bb81c294df0ce3c706dedfcc619e89cdbce71bd4a721f" gracePeriod=60
Mar 13 11:14:26.294637 master-0 kubenswrapper[33013]: I0313 11:14:26.294281 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-c8594cc5d-xblpj"]
Mar 13 11:14:26.295047 master-0 kubenswrapper[33013]: E0313 11:14:26.294990 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-log"
Mar 13 11:14:26.295047 master-0 kubenswrapper[33013]: I0313 11:14:26.295030 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-log"
Mar 13 11:14:26.295170 master-0 kubenswrapper[33013]: E0313 11:14:26.295132 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-api"
Mar 13 11:14:26.295170 master-0 kubenswrapper[33013]: I0313 11:14:26.295143 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-api"
Mar 13 11:14:26.295664 master-0 kubenswrapper[33013]: I0313 11:14:26.295630 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-log"
Mar 13 11:14:26.295664 master-0 kubenswrapper[33013]: I0313 11:14:26.295654 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="3630054b-4002-4cdc-b667-ad4cece7b207" containerName="placement-api"
Mar 13 11:14:26.298905 master-0 kubenswrapper[33013]: I0313 11:14:26.297623 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.319917 master-0 kubenswrapper[33013]: I0313 11:14:26.316493 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Mar 13 11:14:26.319917 master-0 kubenswrapper[33013]: I0313 11:14:26.316720 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Mar 13 11:14:26.319917 master-0 kubenswrapper[33013]: I0313 11:14:26.316830 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Mar 13 11:14:26.420000 master-0 kubenswrapper[33013]: I0313 11:14:26.419657 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-c8594cc5d-xblpj"]
Mar 13 11:14:26.421494 master-0 kubenswrapper[33013]: I0313 11:14:26.421430 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-log-httpd\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.421713 master-0 kubenswrapper[33013]: I0313 11:14:26.421670 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-run-httpd\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.422428 master-0 kubenswrapper[33013]: I0313 11:14:26.421867 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47wz5\" (UniqueName: \"kubernetes.io/projected/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-kube-api-access-47wz5\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.422428 master-0 kubenswrapper[33013]: I0313 11:14:26.421977 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-etc-swift\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.422428 master-0 kubenswrapper[33013]: I0313 11:14:26.422022 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-public-tls-certs\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.422428 master-0 kubenswrapper[33013]: I0313 11:14:26.422134 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-config-data\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.422428 master-0 kubenswrapper[33013]: I0313 11:14:26.422160 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-combined-ca-bundle\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.422428 master-0 kubenswrapper[33013]: I0313 11:14:26.422177 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-internal-tls-certs\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.525070 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-log-httpd\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.525209 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-run-httpd\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.525273 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47wz5\" (UniqueName: \"kubernetes.io/projected/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-kube-api-access-47wz5\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.525309 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-etc-swift\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.525332 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-public-tls-certs\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.525375 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-config-data\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.525395 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-combined-ca-bundle\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.525413 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-internal-tls-certs\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.525827 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-log-httpd\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.526851 master-0 kubenswrapper[33013]: I0313 11:14:26.526210 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-run-httpd\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.534984 master-0 kubenswrapper[33013]: I0313 11:14:26.532102 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-internal-tls-certs\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.534984 master-0 kubenswrapper[33013]: I0313 11:14:26.532220 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-combined-ca-bundle\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.534984 master-0 kubenswrapper[33013]: I0313 11:14:26.533993 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-config-data\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.536534 master-0 kubenswrapper[33013]: I0313 11:14:26.535934 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-etc-swift\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.551185 master-0 kubenswrapper[33013]: I0313 11:14:26.549099 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-public-tls-certs\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.552270 master-0 kubenswrapper[33013]: I0313 11:14:26.552214 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47wz5\" (UniqueName: \"kubernetes.io/projected/1c519b8e-ceb8-4775-b2b0-76a825ca7a9a-kube-api-access-47wz5\") pod \"swift-proxy-c8594cc5d-xblpj\" (UID: \"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a\") " pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.664727 master-0 kubenswrapper[33013]: I0313 11:14:26.664635 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:26.861701 master-0 kubenswrapper[33013]: I0313 11:14:26.861533 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ceac4-backup-0"
Mar 13 11:14:27.672649 master-0 kubenswrapper[33013]: I0313 11:14:27.665825 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:28.003649 master-0 kubenswrapper[33013]: I0313 11:14:28.003376 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-ceac4-volume-lvm-iscsi-0"
Mar 13 11:14:28.345652 master-0 kubenswrapper[33013]: I0313 11:14:28.344181 33013 generic.go:334] "Generic (PLEG): container finished" podID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerID="1493f263662fa53e88cc5d65bedcd38bf9d16f06a98ec33e3cf9ff571b46d841" exitCode=0
Mar 13 11:14:28.345652 master-0 kubenswrapper[33013]: I0313 11:14:28.344241 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599ddd56fb-m48bv" event={"ID":"142b9e51-cb04-42ce-b6f5-b0554d9585a2","Type":"ContainerDied","Data":"1493f263662fa53e88cc5d65bedcd38bf9d16f06a98ec33e3cf9ff571b46d841"}
Mar 13 11:14:31.085573 master-0 kubenswrapper[33013]: I0313 11:14:31.085527 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599ddd56fb-m48bv"
Mar 13 11:14:31.245857 master-0 kubenswrapper[33013]: I0313 11:14:31.241288 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-ovndb-tls-certs\") pod \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") "
Mar 13 11:14:31.245857 master-0 kubenswrapper[33013]: I0313 11:14:31.241473 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-config\") pod \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") "
Mar 13 11:14:31.245857 master-0 kubenswrapper[33013]: I0313 11:14:31.241557 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-httpd-config\") pod \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") "
Mar 13 11:14:31.245857 master-0 kubenswrapper[33013]: I0313 11:14:31.241594 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-combined-ca-bundle\") pod \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") "
Mar 13 11:14:31.245857 master-0 kubenswrapper[33013]: I0313 11:14:31.241964 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwkbh\" (UniqueName: \"kubernetes.io/projected/142b9e51-cb04-42ce-b6f5-b0554d9585a2-kube-api-access-qwkbh\") pod \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\" (UID: \"142b9e51-cb04-42ce-b6f5-b0554d9585a2\") "
Mar 13 11:14:31.247335 master-0 kubenswrapper[33013]: I0313 11:14:31.246948 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "142b9e51-cb04-42ce-b6f5-b0554d9585a2" (UID: "142b9e51-cb04-42ce-b6f5-b0554d9585a2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:14:31.253755 master-0 kubenswrapper[33013]: I0313 11:14:31.253720 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142b9e51-cb04-42ce-b6f5-b0554d9585a2-kube-api-access-qwkbh" (OuterVolumeSpecName: "kube-api-access-qwkbh") pod "142b9e51-cb04-42ce-b6f5-b0554d9585a2" (UID: "142b9e51-cb04-42ce-b6f5-b0554d9585a2"). InnerVolumeSpecName "kube-api-access-qwkbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:14:31.353879 master-0 kubenswrapper[33013]: I0313 11:14:31.353601 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwkbh\" (UniqueName: \"kubernetes.io/projected/142b9e51-cb04-42ce-b6f5-b0554d9585a2-kube-api-access-qwkbh\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:31.353879 master-0 kubenswrapper[33013]: I0313 11:14:31.353657 33013 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-httpd-config\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:31.376462 master-0 kubenswrapper[33013]: I0313 11:14:31.376321 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "142b9e51-cb04-42ce-b6f5-b0554d9585a2" (UID: "142b9e51-cb04-42ce-b6f5-b0554d9585a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:14:31.402647 master-0 kubenswrapper[33013]: I0313 11:14:31.402280 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-config" (OuterVolumeSpecName: "config") pod "142b9e51-cb04-42ce-b6f5-b0554d9585a2" (UID: "142b9e51-cb04-42ce-b6f5-b0554d9585a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:14:31.415594 master-0 kubenswrapper[33013]: I0313 11:14:31.415185 33013 generic.go:334] "Generic (PLEG): container finished" podID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerID="67449c2217974697072bb81c294df0ce3c706dedfcc619e89cdbce71bd4a721f" exitCode=143
Mar 13 11:14:31.415594 master-0 kubenswrapper[33013]: I0313 11:14:31.415293 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7657b6885c-5c572" event={"ID":"0341ff60-4819-448f-98f7-4ee8216d5d39","Type":"ContainerDied","Data":"67449c2217974697072bb81c294df0ce3c706dedfcc619e89cdbce71bd4a721f"}
Mar 13 11:14:31.419367 master-0 kubenswrapper[33013]: I0313 11:14:31.419287 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599ddd56fb-m48bv" event={"ID":"142b9e51-cb04-42ce-b6f5-b0554d9585a2","Type":"ContainerDied","Data":"6af1f0e61e74405088c3162374d373c94ca601a4a6aacff5fcdb66babfa11985"}
Mar 13 11:14:31.419551 master-0 kubenswrapper[33013]: I0313 11:14:31.419375 33013 scope.go:117] "RemoveContainer" containerID="74c6758cb83b178f150a2428138bc6dd87b3665d5f497b6843db0a0dcb9d6546"
Mar 13 11:14:31.420170 master-0 kubenswrapper[33013]: I0313 11:14:31.419608 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599ddd56fb-m48bv"
Mar 13 11:14:31.456721 master-0 kubenswrapper[33013]: I0313 11:14:31.456478 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-config\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:31.456721 master-0 kubenswrapper[33013]: I0313 11:14:31.456520 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:31.514706 master-0 kubenswrapper[33013]: I0313 11:14:31.509923 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tjt8v"]
Mar 13 11:14:31.514706 master-0 kubenswrapper[33013]: E0313 11:14:31.510486 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerName="neutron-httpd"
Mar 13 11:14:31.514706 master-0 kubenswrapper[33013]: I0313 11:14:31.510504 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerName="neutron-httpd"
Mar 13 11:14:31.514706 master-0 kubenswrapper[33013]: E0313 11:14:31.510514 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerName="neutron-api"
Mar 13 11:14:31.514706 master-0 kubenswrapper[33013]: I0313 11:14:31.510521 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerName="neutron-api"
Mar 13 11:14:31.514706 master-0 kubenswrapper[33013]: I0313 11:14:31.511236 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerName="neutron-httpd"
Mar 13 11:14:31.514706 master-0 kubenswrapper[33013]: I0313 11:14:31.511287 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" containerName="neutron-api"
Mar 13 11:14:31.514706 master-0 kubenswrapper[33013]: I0313 11:14:31.512084 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:31.562163 master-0 kubenswrapper[33013]: I0313 11:14:31.562061 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m7c7\" (UniqueName: \"kubernetes.io/projected/5b95da59-1633-4171-a92c-e192d65465f4-kube-api-access-7m7c7\") pod \"nova-api-db-create-tjt8v\" (UID: \"5b95da59-1633-4171-a92c-e192d65465f4\") " pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:31.562458 master-0 kubenswrapper[33013]: I0313 11:14:31.562411 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b95da59-1633-4171-a92c-e192d65465f4-operator-scripts\") pod \"nova-api-db-create-tjt8v\" (UID: \"5b95da59-1633-4171-a92c-e192d65465f4\") " pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:31.598308 master-0 kubenswrapper[33013]: I0313 11:14:31.597842 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "142b9e51-cb04-42ce-b6f5-b0554d9585a2" (UID: "142b9e51-cb04-42ce-b6f5-b0554d9585a2"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:14:31.667117 master-0 kubenswrapper[33013]: I0313 11:14:31.665689 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b95da59-1633-4171-a92c-e192d65465f4-operator-scripts\") pod \"nova-api-db-create-tjt8v\" (UID: \"5b95da59-1633-4171-a92c-e192d65465f4\") " pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:31.667117 master-0 kubenswrapper[33013]: I0313 11:14:31.665966 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m7c7\" (UniqueName: \"kubernetes.io/projected/5b95da59-1633-4171-a92c-e192d65465f4-kube-api-access-7m7c7\") pod \"nova-api-db-create-tjt8v\" (UID: \"5b95da59-1633-4171-a92c-e192d65465f4\") " pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:31.667117 master-0 kubenswrapper[33013]: I0313 11:14:31.666167 33013 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/142b9e51-cb04-42ce-b6f5-b0554d9585a2-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:31.667117 master-0 kubenswrapper[33013]: I0313 11:14:31.666711 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b95da59-1633-4171-a92c-e192d65465f4-operator-scripts\") pod \"nova-api-db-create-tjt8v\" (UID: \"5b95da59-1633-4171-a92c-e192d65465f4\") " pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:31.866690 master-0 kubenswrapper[33013]: I0313 11:14:31.866487 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tjt8v"]
Mar 13 11:14:32.005371 master-0 kubenswrapper[33013]: I0313 11:14:31.993754 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m7c7\" (UniqueName: \"kubernetes.io/projected/5b95da59-1633-4171-a92c-e192d65465f4-kube-api-access-7m7c7\") pod \"nova-api-db-create-tjt8v\" (UID: \"5b95da59-1633-4171-a92c-e192d65465f4\") " pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:32.093313 master-0 kubenswrapper[33013]: I0313 11:14:32.090166 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-bjft7"]
Mar 13 11:14:32.093313 master-0 kubenswrapper[33013]: I0313 11:14:32.091947 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bjft7"
Mar 13 11:14:32.117911 master-0 kubenswrapper[33013]: I0313 11:14:32.117803 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-599ddd56fb-m48bv"]
Mar 13 11:14:32.132290 master-0 kubenswrapper[33013]: I0313 11:14:32.131835 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bjft7"]
Mar 13 11:14:32.147758 master-0 kubenswrapper[33013]: I0313 11:14:32.147133 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-599ddd56fb-m48bv"]
Mar 13 11:14:32.216151 master-0 kubenswrapper[33013]: I0313 11:14:32.215615 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:32.233117 master-0 kubenswrapper[33013]: I0313 11:14:32.232997 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7023-account-create-update-lfxrb"]
Mar 13 11:14:32.235300 master-0 kubenswrapper[33013]: I0313 11:14:32.235271 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:32.239657 master-0 kubenswrapper[33013]: I0313 11:14:32.239619 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Mar 13 11:14:32.245874 master-0 kubenswrapper[33013]: I0313 11:14:32.245781 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7023-account-create-update-lfxrb"]
Mar 13 11:14:32.310859 master-0 kubenswrapper[33013]: I0313 11:14:32.310801 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f57185-3e58-4df7-9593-cfaf45287839-operator-scripts\") pod \"nova-cell0-db-create-bjft7\" (UID: \"b2f57185-3e58-4df7-9593-cfaf45287839\") " pod="openstack/nova-cell0-db-create-bjft7"
Mar 13 11:14:32.311660 master-0 kubenswrapper[33013]: I0313 11:14:32.311633 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swgnd\" (UniqueName: \"kubernetes.io/projected/b2f57185-3e58-4df7-9593-cfaf45287839-kube-api-access-swgnd\") pod \"nova-cell0-db-create-bjft7\" (UID: \"b2f57185-3e58-4df7-9593-cfaf45287839\") " pod="openstack/nova-cell0-db-create-bjft7"
Mar 13 11:14:32.352645 master-0 kubenswrapper[33013]: I0313 11:14:32.351038 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-66z58"]
Mar 13 11:14:32.353777 master-0 kubenswrapper[33013]: I0313 11:14:32.353664 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-66z58"
Mar 13 11:14:32.366347 master-0 kubenswrapper[33013]: I0313 11:14:32.366271 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-66z58"]
Mar 13 11:14:32.422016 master-0 kubenswrapper[33013]: I0313 11:14:32.421095 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95c31a9-2573-4ac0-8513-5b0889aeb289-operator-scripts\") pod \"nova-cell1-db-create-66z58\" (UID: \"d95c31a9-2573-4ac0-8513-5b0889aeb289\") " pod="openstack/nova-cell1-db-create-66z58"
Mar 13 11:14:32.422016 master-0 kubenswrapper[33013]: I0313 11:14:32.421257 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f57185-3e58-4df7-9593-cfaf45287839-operator-scripts\") pod \"nova-cell0-db-create-bjft7\" (UID: \"b2f57185-3e58-4df7-9593-cfaf45287839\") " pod="openstack/nova-cell0-db-create-bjft7"
Mar 13 11:14:32.422016 master-0 kubenswrapper[33013]: I0313 11:14:32.421360 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-operator-scripts\") pod \"nova-api-7023-account-create-update-lfxrb\" (UID: \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\") " pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:32.422016 master-0 kubenswrapper[33013]: I0313 11:14:32.421470 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvrvc\" (UniqueName: \"kubernetes.io/projected/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-kube-api-access-wvrvc\") pod \"nova-api-7023-account-create-update-lfxrb\" (UID: \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\") " pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:32.422016 master-0 kubenswrapper[33013]: I0313 11:14:32.421556 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j24fd\" (UniqueName: \"kubernetes.io/projected/d95c31a9-2573-4ac0-8513-5b0889aeb289-kube-api-access-j24fd\") pod \"nova-cell1-db-create-66z58\" (UID: \"d95c31a9-2573-4ac0-8513-5b0889aeb289\") " pod="openstack/nova-cell1-db-create-66z58"
Mar 13 11:14:32.422016 master-0 kubenswrapper[33013]: I0313 11:14:32.421651 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swgnd\" (UniqueName: \"kubernetes.io/projected/b2f57185-3e58-4df7-9593-cfaf45287839-kube-api-access-swgnd\") pod \"nova-cell0-db-create-bjft7\" (UID: \"b2f57185-3e58-4df7-9593-cfaf45287839\") " pod="openstack/nova-cell0-db-create-bjft7"
Mar 13 11:14:32.422941 master-0 kubenswrapper[33013]: I0313 11:14:32.422906 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f57185-3e58-4df7-9593-cfaf45287839-operator-scripts\") pod \"nova-cell0-db-create-bjft7\" (UID: \"b2f57185-3e58-4df7-9593-cfaf45287839\") " pod="openstack/nova-cell0-db-create-bjft7"
Mar 13 11:14:32.435552 master-0 kubenswrapper[33013]: I0313 11:14:32.435467 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6ea2-account-create-update-hg4j9"]
Mar 13 11:14:32.438042 master-0 kubenswrapper[33013]: I0313 11:14:32.437924 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9"
Mar 13 11:14:32.455675 master-0 kubenswrapper[33013]: I0313 11:14:32.452648 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6ea2-account-create-update-hg4j9"]
Mar 13 11:14:32.457826 master-0 kubenswrapper[33013]: I0313 11:14:32.457797 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Mar 13 11:14:32.462642 master-0 kubenswrapper[33013]: I0313 11:14:32.462559 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swgnd\" (UniqueName: \"kubernetes.io/projected/b2f57185-3e58-4df7-9593-cfaf45287839-kube-api-access-swgnd\") pod \"nova-cell0-db-create-bjft7\" (UID: \"b2f57185-3e58-4df7-9593-cfaf45287839\") " pod="openstack/nova-cell0-db-create-bjft7"
Mar 13 11:14:32.526821 master-0 kubenswrapper[33013]: I0313 11:14:32.526688 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tlkv\" (UniqueName: \"kubernetes.io/projected/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-kube-api-access-9tlkv\") pod \"nova-cell0-6ea2-account-create-update-hg4j9\" (UID: \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\") " pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9"
Mar 13 11:14:32.527026 master-0 kubenswrapper[33013]: I0313 11:14:32.526853 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-operator-scripts\") pod \"nova-api-7023-account-create-update-lfxrb\" (UID: \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\") " pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:32.527026 master-0 kubenswrapper[33013]: I0313 11:14:32.526889 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvrvc\" (UniqueName: \"kubernetes.io/projected/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-kube-api-access-wvrvc\") pod \"nova-api-7023-account-create-update-lfxrb\" (UID: \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\") " pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:32.527026 master-0 kubenswrapper[33013]: I0313 11:14:32.526922 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-operator-scripts\") pod \"nova-cell0-6ea2-account-create-update-hg4j9\" (UID: \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\") " pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9"
Mar 13 11:14:32.527026 master-0 kubenswrapper[33013]: I0313 11:14:32.526947 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j24fd\" (UniqueName: \"kubernetes.io/projected/d95c31a9-2573-4ac0-8513-5b0889aeb289-kube-api-access-j24fd\") pod \"nova-cell1-db-create-66z58\" (UID: \"d95c31a9-2573-4ac0-8513-5b0889aeb289\") " pod="openstack/nova-cell1-db-create-66z58"
Mar 13 11:14:32.527971 master-0 kubenswrapper[33013]: I0313 11:14:32.527920 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95c31a9-2573-4ac0-8513-5b0889aeb289-operator-scripts\") pod \"nova-cell1-db-create-66z58\" (UID: \"d95c31a9-2573-4ac0-8513-5b0889aeb289\") " pod="openstack/nova-cell1-db-create-66z58"
Mar 13 11:14:32.531052 master-0 kubenswrapper[33013]: I0313 11:14:32.530924 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-operator-scripts\") pod \"nova-api-7023-account-create-update-lfxrb\" (UID: \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\") " pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:32.532437 master-0 kubenswrapper[33013]: I0313 11:14:32.532407 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95c31a9-2573-4ac0-8513-5b0889aeb289-operator-scripts\") pod \"nova-cell1-db-create-66z58\" (UID: \"d95c31a9-2573-4ac0-8513-5b0889aeb289\") " pod="openstack/nova-cell1-db-create-66z58"
Mar 13 11:14:32.551332 master-0 kubenswrapper[33013]: I0313 11:14:32.549763 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvrvc\" (UniqueName: \"kubernetes.io/projected/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-kube-api-access-wvrvc\") pod \"nova-api-7023-account-create-update-lfxrb\" (UID: \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\") " pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:32.555667 master-0 kubenswrapper[33013]: I0313 11:14:32.555402 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j24fd\" (UniqueName: \"kubernetes.io/projected/d95c31a9-2573-4ac0-8513-5b0889aeb289-kube-api-access-j24fd\") pod \"nova-cell1-db-create-66z58\" (UID: \"d95c31a9-2573-4ac0-8513-5b0889aeb289\") " pod="openstack/nova-cell1-db-create-66z58"
Mar 13 11:14:32.565247 master-0 kubenswrapper[33013]: I0313 11:14:32.564296 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-982f-account-create-update-bn86g"]
Mar 13 11:14:32.574929 master-0 kubenswrapper[33013]: I0313 11:14:32.566970 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-982f-account-create-update-bn86g"
Mar 13 11:14:32.574929 master-0 kubenswrapper[33013]: I0313 11:14:32.569411 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:32.574929 master-0 kubenswrapper[33013]: I0313 11:14:32.569934 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Mar 13 11:14:32.590343 master-0 kubenswrapper[33013]: I0313 11:14:32.588884 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-982f-account-create-update-bn86g"]
Mar 13 11:14:32.631743 master-0 kubenswrapper[33013]: E0313 11:14:32.631670 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"]
Mar 13 11:14:32.632690 master-0 kubenswrapper[33013]: I0313 11:14:32.632299 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd287\" (UniqueName: \"kubernetes.io/projected/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-kube-api-access-zd287\") pod \"nova-cell1-982f-account-create-update-bn86g\" (UID: \"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\") " pod="openstack/nova-cell1-982f-account-create-update-bn86g"
Mar 13 11:14:32.632690 master-0 kubenswrapper[33013]: I0313 11:14:32.632445 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-operator-scripts\") pod \"nova-cell0-6ea2-account-create-update-hg4j9\" (UID: \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\") " pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9"
Mar 13 11:14:32.632941 master-0 kubenswrapper[33013]: I0313 11:14:32.632665 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\"
(UniqueName: \"kubernetes.io/configmap/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-operator-scripts\") pod \"nova-cell1-982f-account-create-update-bn86g\" (UID: \"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\") " pod="openstack/nova-cell1-982f-account-create-update-bn86g" Mar 13 11:14:32.632941 master-0 kubenswrapper[33013]: I0313 11:14:32.632840 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tlkv\" (UniqueName: \"kubernetes.io/projected/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-kube-api-access-9tlkv\") pod \"nova-cell0-6ea2-account-create-update-hg4j9\" (UID: \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\") " pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9" Mar 13 11:14:32.633097 master-0 kubenswrapper[33013]: E0313 11:14:32.631952 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:32.633671 master-0 kubenswrapper[33013]: I0313 11:14:32.633388 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-operator-scripts\") pod \"nova-cell0-6ea2-account-create-update-hg4j9\" (UID: \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\") " pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9" Mar 13 11:14:32.634989 master-0 kubenswrapper[33013]: E0313 11:14:32.634883 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" 
cmd=["/bin/true"] Mar 13 11:14:32.635195 master-0 kubenswrapper[33013]: E0313 11:14:32.634959 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:32.635795 master-0 kubenswrapper[33013]: E0313 11:14:32.635736 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:32.636363 master-0 kubenswrapper[33013]: E0313 11:14:32.635816 33013 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" podUID="3b99f02d-f8e2-497b-b68b-8e445e7b7541" containerName="ironic-neutron-agent" Mar 13 11:14:32.637039 master-0 kubenswrapper[33013]: E0313 11:14:32.636987 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:32.637250 master-0 kubenswrapper[33013]: E0313 11:14:32.637206 33013 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" podUID="3b99f02d-f8e2-497b-b68b-8e445e7b7541" containerName="ironic-neutron-agent" Mar 13 11:14:32.656485 master-0 kubenswrapper[33013]: I0313 11:14:32.656441 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tlkv\" (UniqueName: \"kubernetes.io/projected/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-kube-api-access-9tlkv\") pod \"nova-cell0-6ea2-account-create-update-hg4j9\" (UID: \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\") " pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9" Mar 13 11:14:32.699570 master-0 kubenswrapper[33013]: I0313 11:14:32.699495 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-66z58" Mar 13 11:14:32.724977 master-0 kubenswrapper[33013]: I0313 11:14:32.724923 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bjft7" Mar 13 11:14:32.750181 master-0 kubenswrapper[33013]: I0313 11:14:32.750123 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="142b9e51-cb04-42ce-b6f5-b0554d9585a2" path="/var/lib/kubelet/pods/142b9e51-cb04-42ce-b6f5-b0554d9585a2/volumes" Mar 13 11:14:32.752203 master-0 kubenswrapper[33013]: I0313 11:14:32.752140 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-operator-scripts\") pod \"nova-cell1-982f-account-create-update-bn86g\" (UID: \"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\") " pod="openstack/nova-cell1-982f-account-create-update-bn86g" Mar 13 11:14:32.752568 master-0 kubenswrapper[33013]: I0313 11:14:32.752543 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd287\" (UniqueName: \"kubernetes.io/projected/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-kube-api-access-zd287\") pod \"nova-cell1-982f-account-create-update-bn86g\" (UID: \"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\") " pod="openstack/nova-cell1-982f-account-create-update-bn86g" Mar 13 11:14:32.754006 master-0 kubenswrapper[33013]: I0313 11:14:32.752931 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-operator-scripts\") pod \"nova-cell1-982f-account-create-update-bn86g\" (UID: \"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\") " pod="openstack/nova-cell1-982f-account-create-update-bn86g" Mar 13 11:14:32.771762 master-0 kubenswrapper[33013]: I0313 11:14:32.771643 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd287\" (UniqueName: \"kubernetes.io/projected/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-kube-api-access-zd287\") pod \"nova-cell1-982f-account-create-update-bn86g\" (UID: 
\"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\") " pod="openstack/nova-cell1-982f-account-create-update-bn86g" Mar 13 11:14:32.841457 master-0 kubenswrapper[33013]: I0313 11:14:32.840737 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9" Mar 13 11:14:32.961148 master-0 kubenswrapper[33013]: I0313 11:14:32.960888 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-982f-account-create-update-bn86g" Mar 13 11:14:37.636617 master-0 kubenswrapper[33013]: E0313 11:14:37.635750 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:37.636617 master-0 kubenswrapper[33013]: E0313 11:14:37.635890 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:37.636617 master-0 kubenswrapper[33013]: E0313 11:14:37.636513 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:37.636617 master-0 kubenswrapper[33013]: E0313 11:14:37.636573 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:37.637718 master-0 kubenswrapper[33013]: E0313 11:14:37.636916 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:37.637718 master-0 kubenswrapper[33013]: E0313 11:14:37.636946 33013 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" podUID="3b99f02d-f8e2-497b-b68b-8e445e7b7541" containerName="ironic-neutron-agent" Mar 13 11:14:37.637718 master-0 kubenswrapper[33013]: E0313 11:14:37.636997 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" cmd=["/bin/true"] Mar 13 11:14:37.637718 master-0 kubenswrapper[33013]: E0313 11:14:37.637012 33013 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef is running failed: container process not found" probeType="Liveness" 
pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" podUID="3b99f02d-f8e2-497b-b68b-8e445e7b7541" containerName="ironic-neutron-agent" Mar 13 11:14:38.827508 master-0 kubenswrapper[33013]: I0313 11:14:38.827092 33013 scope.go:117] "RemoveContainer" containerID="1493f263662fa53e88cc5d65bedcd38bf9d16f06a98ec33e3cf9ff571b46d841" Mar 13 11:14:38.913981 master-0 kubenswrapper[33013]: I0313 11:14:38.913916 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:39.137672 master-0 kubenswrapper[33013]: I0313 11:14:39.137574 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-logs\") pod \"0341ff60-4819-448f-98f7-4ee8216d5d39\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " Mar 13 11:14:39.137982 master-0 kubenswrapper[33013]: I0313 11:14:39.137703 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0341ff60-4819-448f-98f7-4ee8216d5d39-etc-podinfo\") pod \"0341ff60-4819-448f-98f7-4ee8216d5d39\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " Mar 13 11:14:39.137982 master-0 kubenswrapper[33013]: I0313 11:14:39.137763 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-scripts\") pod \"0341ff60-4819-448f-98f7-4ee8216d5d39\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " Mar 13 11:14:39.137982 master-0 kubenswrapper[33013]: I0313 11:14:39.137816 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cbsx\" (UniqueName: \"kubernetes.io/projected/0341ff60-4819-448f-98f7-4ee8216d5d39-kube-api-access-9cbsx\") pod \"0341ff60-4819-448f-98f7-4ee8216d5d39\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " Mar 13 11:14:39.137982 
master-0 kubenswrapper[33013]: I0313 11:14:39.137888 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-merged\") pod \"0341ff60-4819-448f-98f7-4ee8216d5d39\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " Mar 13 11:14:39.137982 master-0 kubenswrapper[33013]: I0313 11:14:39.137958 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-custom\") pod \"0341ff60-4819-448f-98f7-4ee8216d5d39\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " Mar 13 11:14:39.139520 master-0 kubenswrapper[33013]: I0313 11:14:39.139453 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-logs" (OuterVolumeSpecName: "logs") pod "0341ff60-4819-448f-98f7-4ee8216d5d39" (UID: "0341ff60-4819-448f-98f7-4ee8216d5d39"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:14:39.145018 master-0 kubenswrapper[33013]: I0313 11:14:39.142292 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-scripts" (OuterVolumeSpecName: "scripts") pod "0341ff60-4819-448f-98f7-4ee8216d5d39" (UID: "0341ff60-4819-448f-98f7-4ee8216d5d39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:39.145018 master-0 kubenswrapper[33013]: I0313 11:14:39.142671 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0341ff60-4819-448f-98f7-4ee8216d5d39-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "0341ff60-4819-448f-98f7-4ee8216d5d39" (UID: "0341ff60-4819-448f-98f7-4ee8216d5d39"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 13 11:14:39.145757 master-0 kubenswrapper[33013]: I0313 11:14:39.145664 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "0341ff60-4819-448f-98f7-4ee8216d5d39" (UID: "0341ff60-4819-448f-98f7-4ee8216d5d39"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:14:39.146419 master-0 kubenswrapper[33013]: I0313 11:14:39.146038 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0341ff60-4819-448f-98f7-4ee8216d5d39" (UID: "0341ff60-4819-448f-98f7-4ee8216d5d39"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:39.147413 master-0 kubenswrapper[33013]: I0313 11:14:39.147366 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0341ff60-4819-448f-98f7-4ee8216d5d39-kube-api-access-9cbsx" (OuterVolumeSpecName: "kube-api-access-9cbsx") pod "0341ff60-4819-448f-98f7-4ee8216d5d39" (UID: "0341ff60-4819-448f-98f7-4ee8216d5d39"). InnerVolumeSpecName "kube-api-access-9cbsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:39.240672 master-0 kubenswrapper[33013]: I0313 11:14:39.240603 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-combined-ca-bundle\") pod \"0341ff60-4819-448f-98f7-4ee8216d5d39\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " Mar 13 11:14:39.240672 master-0 kubenswrapper[33013]: I0313 11:14:39.240685 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data\") pod \"0341ff60-4819-448f-98f7-4ee8216d5d39\" (UID: \"0341ff60-4819-448f-98f7-4ee8216d5d39\") " Mar 13 11:14:39.246524 master-0 kubenswrapper[33013]: I0313 11:14:39.242778 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:39.246524 master-0 kubenswrapper[33013]: I0313 11:14:39.242820 33013 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0341ff60-4819-448f-98f7-4ee8216d5d39-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:39.246524 master-0 kubenswrapper[33013]: I0313 11:14:39.242840 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:39.246524 master-0 kubenswrapper[33013]: I0313 11:14:39.242854 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cbsx\" (UniqueName: \"kubernetes.io/projected/0341ff60-4819-448f-98f7-4ee8216d5d39-kube-api-access-9cbsx\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:39.246524 master-0 kubenswrapper[33013]: I0313 11:14:39.242869 33013 
reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-merged\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:39.246524 master-0 kubenswrapper[33013]: I0313 11:14:39.242881 33013 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:39.380255 master-0 kubenswrapper[33013]: I0313 11:14:39.379767 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data" (OuterVolumeSpecName: "config-data") pod "0341ff60-4819-448f-98f7-4ee8216d5d39" (UID: "0341ff60-4819-448f-98f7-4ee8216d5d39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:39.463193 master-0 kubenswrapper[33013]: I0313 11:14:39.463112 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:39.512379 master-0 kubenswrapper[33013]: I0313 11:14:39.512289 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0341ff60-4819-448f-98f7-4ee8216d5d39" (UID: "0341ff60-4819-448f-98f7-4ee8216d5d39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:39.582255 master-0 kubenswrapper[33013]: I0313 11:14:39.582188 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0341ff60-4819-448f-98f7-4ee8216d5d39-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:39.586579 master-0 kubenswrapper[33013]: I0313 11:14:39.586217 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tjt8v"] Mar 13 11:14:39.654718 master-0 kubenswrapper[33013]: I0313 11:14:39.654662 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7657b6885c-5c572" event={"ID":"0341ff60-4819-448f-98f7-4ee8216d5d39","Type":"ContainerDied","Data":"31df7c8ccb67e3a199b7e432cff02938c0487cd96e8929492a7791ae6ddb110e"} Mar 13 11:14:39.655328 master-0 kubenswrapper[33013]: I0313 11:14:39.654740 33013 scope.go:117] "RemoveContainer" containerID="e67f5bd85896c51d4994dea3411f9ed42ea5cd21ebd9ed42af3b5df24f197d91" Mar 13 11:14:39.655456 master-0 kubenswrapper[33013]: I0313 11:14:39.655435 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7657b6885c-5c572" Mar 13 11:14:39.667084 master-0 kubenswrapper[33013]: I0313 11:14:39.667020 33013 generic.go:334] "Generic (PLEG): container finished" podID="3b99f02d-f8e2-497b-b68b-8e445e7b7541" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" exitCode=1 Mar 13 11:14:39.667084 master-0 kubenswrapper[33013]: I0313 11:14:39.667086 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" event={"ID":"3b99f02d-f8e2-497b-b68b-8e445e7b7541","Type":"ContainerDied","Data":"2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef"} Mar 13 11:14:39.668198 master-0 kubenswrapper[33013]: I0313 11:14:39.668156 33013 scope.go:117] "RemoveContainer" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef" Mar 13 11:14:39.668496 master-0 kubenswrapper[33013]: E0313 11:14:39.668458 33013 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7f9d77888-kwqwh_openstack(3b99f02d-f8e2-497b-b68b-8e445e7b7541)\"" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" podUID="3b99f02d-f8e2-497b-b68b-8e445e7b7541" Mar 13 11:14:39.758521 master-0 kubenswrapper[33013]: I0313 11:14:39.758462 33013 scope.go:117] "RemoveContainer" containerID="67449c2217974697072bb81c294df0ce3c706dedfcc619e89cdbce71bd4a721f" Mar 13 11:14:39.771356 master-0 kubenswrapper[33013]: I0313 11:14:39.771291 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-c8594cc5d-xblpj"] Mar 13 11:14:39.998253 master-0 kubenswrapper[33013]: I0313 11:14:39.998159 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bjft7"] Mar 13 11:14:40.022684 master-0 kubenswrapper[33013]: I0313 11:14:40.022528 33013 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ironic-7657b6885c-5c572"] Mar 13 11:14:40.059261 master-0 kubenswrapper[33013]: I0313 11:14:40.054292 33013 scope.go:117] "RemoveContainer" containerID="711ee6050a695ebff3bfa2dbf73de3d552785202230091172158e03672ec0c6a" Mar 13 11:14:40.066856 master-0 kubenswrapper[33013]: I0313 11:14:40.059910 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-7657b6885c-5c572"] Mar 13 11:14:40.124409 master-0 kubenswrapper[33013]: I0313 11:14:40.124342 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-66z58"] Mar 13 11:14:40.171753 master-0 kubenswrapper[33013]: W0313 11:14:40.169693 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd95c31a9_2573_4ac0_8513_5b0889aeb289.slice/crio-a31a038100245dacfb5b2c8fa0ad2c678d2ccc21aa34f9e589984d393f568ea4 WatchSource:0}: Error finding container a31a038100245dacfb5b2c8fa0ad2c678d2ccc21aa34f9e589984d393f568ea4: Status 404 returned error can't find the container with id a31a038100245dacfb5b2c8fa0ad2c678d2ccc21aa34f9e589984d393f568ea4 Mar 13 11:14:40.209051 master-0 kubenswrapper[33013]: I0313 11:14:40.205551 33013 scope.go:117] "RemoveContainer" containerID="d28239b3ecad677258f0a8ab2dcbfcbf225e5fc929b3f5b896bdf17cf2238051" Mar 13 11:14:40.454294 master-0 kubenswrapper[33013]: I0313 11:14:40.454186 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-982f-account-create-update-bn86g"] Mar 13 11:14:40.477532 master-0 kubenswrapper[33013]: W0313 11:14:40.477489 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf8d6097_bf79_447e_bcea_f17f3ff4f62a.slice/crio-38b88dc5e99c056bb5b2c26e865b1d3615c4b83aa0eeb5c3878fef92eb91cffa WatchSource:0}: Error finding container 38b88dc5e99c056bb5b2c26e865b1d3615c4b83aa0eeb5c3878fef92eb91cffa: Status 404 returned error 
can't find the container with id 38b88dc5e99c056bb5b2c26e865b1d3615c4b83aa0eeb5c3878fef92eb91cffa Mar 13 11:14:40.479786 master-0 kubenswrapper[33013]: I0313 11:14:40.479753 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7023-account-create-update-lfxrb"] Mar 13 11:14:40.521683 master-0 kubenswrapper[33013]: W0313 11:14:40.521617 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8aea14ff_34fc_4c33_940e_d438ef8f2bd9.slice/crio-9c5af260129bab0f8065af0791590049822f5c23017e3dc0177fa776815bef70 WatchSource:0}: Error finding container 9c5af260129bab0f8065af0791590049822f5c23017e3dc0177fa776815bef70: Status 404 returned error can't find the container with id 9c5af260129bab0f8065af0791590049822f5c23017e3dc0177fa776815bef70 Mar 13 11:14:40.524473 master-0 kubenswrapper[33013]: I0313 11:14:40.520090 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6ea2-account-create-update-hg4j9"] Mar 13 11:14:40.753103 master-0 kubenswrapper[33013]: I0313 11:14:40.752556 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" path="/var/lib/kubelet/pods/0341ff60-4819-448f-98f7-4ee8216d5d39/volumes" Mar 13 11:14:40.770546 master-0 kubenswrapper[33013]: I0313 11:14:40.770464 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-982f-account-create-update-bn86g" event={"ID":"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7","Type":"ContainerStarted","Data":"532a529655f078afa3f8d438abd2262d2a0e33f2cd3ed32e3d4d18660f352e03"} Mar 13 11:14:40.821454 master-0 kubenswrapper[33013]: I0313 11:14:40.820334 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9" event={"ID":"8aea14ff-34fc-4c33-940e-d438ef8f2bd9","Type":"ContainerStarted","Data":"9c5af260129bab0f8065af0791590049822f5c23017e3dc0177fa776815bef70"} Mar 13 
11:14:40.853407 master-0 kubenswrapper[33013]: I0313 11:14:40.853338 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7023-account-create-update-lfxrb" event={"ID":"cf8d6097-bf79-447e-bcea-f17f3ff4f62a","Type":"ContainerStarted","Data":"38b88dc5e99c056bb5b2c26e865b1d3615c4b83aa0eeb5c3878fef92eb91cffa"}
Mar 13 11:14:40.872131 master-0 kubenswrapper[33013]: I0313 11:14:40.871631 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c8594cc5d-xblpj" event={"ID":"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a","Type":"ContainerStarted","Data":"8919ca17238ab8ad5f2549572d97fb2e55004785e5f229301c9fb0220313b918"}
Mar 13 11:14:40.872131 master-0 kubenswrapper[33013]: I0313 11:14:40.871723 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c8594cc5d-xblpj" event={"ID":"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a","Type":"ContainerStarted","Data":"2d59384ca9b4a39a3310ec9010f1ee423784f7dd24f5c8ca2dfc0711b0f66269"}
Mar 13 11:14:40.875952 master-0 kubenswrapper[33013]: I0313 11:14:40.875923 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-fxhxg" event={"ID":"68f912cc-d199-4a01-bec5-765cc17824bb","Type":"ContainerStarted","Data":"84e69e9acefe53f167b6f955bd0094cd028100c0bc2ac3c31b19041e22f0f0d1"}
Mar 13 11:14:40.881142 master-0 kubenswrapper[33013]: I0313 11:14:40.881051 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-66z58" event={"ID":"d95c31a9-2573-4ac0-8513-5b0889aeb289","Type":"ContainerStarted","Data":"625ca0b1dfbcdab65b4ac00ec5a702c749dbfa832c7273b90a6b0bd4e6b28a86"}
Mar 13 11:14:40.881319 master-0 kubenswrapper[33013]: I0313 11:14:40.881304 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-66z58" event={"ID":"d95c31a9-2573-4ac0-8513-5b0889aeb289","Type":"ContainerStarted","Data":"a31a038100245dacfb5b2c8fa0ad2c678d2ccc21aa34f9e589984d393f568ea4"}
Mar 13 11:14:40.886270 master-0 kubenswrapper[33013]: I0313 11:14:40.886221 33013 generic.go:334] "Generic (PLEG): container finished" podID="5b95da59-1633-4171-a92c-e192d65465f4" containerID="b47e5bea19585985d437f3cffcd447a93d3b09d3d17635a93baaa17eda1773e8" exitCode=0
Mar 13 11:14:40.886497 master-0 kubenswrapper[33013]: I0313 11:14:40.886416 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tjt8v" event={"ID":"5b95da59-1633-4171-a92c-e192d65465f4","Type":"ContainerDied","Data":"b47e5bea19585985d437f3cffcd447a93d3b09d3d17635a93baaa17eda1773e8"}
Mar 13 11:14:40.886563 master-0 kubenswrapper[33013]: I0313 11:14:40.886504 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tjt8v" event={"ID":"5b95da59-1633-4171-a92c-e192d65465f4","Type":"ContainerStarted","Data":"6d39bef9556042e4f9f798532cab8c2249f5e29e16203a64efd30d2ecd41f43e"}
Mar 13 11:14:40.890224 master-0 kubenswrapper[33013]: I0313 11:14:40.890178 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bjft7" event={"ID":"b2f57185-3e58-4df7-9593-cfaf45287839","Type":"ContainerStarted","Data":"c66d939fd13793e2039c6217ac216c2ff954d70c1140f5662ea216436e0e4052"}
Mar 13 11:14:40.890346 master-0 kubenswrapper[33013]: I0313 11:14:40.890331 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bjft7" event={"ID":"b2f57185-3e58-4df7-9593-cfaf45287839","Type":"ContainerStarted","Data":"e79b0d5ed887a1d89de839603e274f601c86cfa248cb60ba1d10df0ae3efb4db"}
Mar 13 11:14:40.907943 master-0 kubenswrapper[33013]: I0313 11:14:40.902940 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-fxhxg" podStartSLOduration=3.4602484000000002 podStartE2EDuration="18.902904005s" podCreationTimestamp="2026-03-13 11:14:22 +0000 UTC" firstStartedPulling="2026-03-13 11:14:23.512755174 +0000 UTC m=+1046.988708523" lastFinishedPulling="2026-03-13 11:14:38.955410759 +0000 UTC m=+1062.431364128" observedRunningTime="2026-03-13 11:14:40.895376279 +0000 UTC m=+1064.371329628" watchObservedRunningTime="2026-03-13 11:14:40.902904005 +0000 UTC m=+1064.378857354"
Mar 13 11:14:40.908563 master-0 kubenswrapper[33013]: I0313 11:14:40.908440 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"62dc5405-5c84-43a7-9b0d-400716bf7ab4","Type":"ContainerStarted","Data":"3e167f15a1cd1149c7d5aeb393c659707dc82f0567fb36d0ecc265757f08a10c"}
Mar 13 11:14:41.045348 master-0 kubenswrapper[33013]: I0313 11:14:41.045227 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.715127527 podStartE2EDuration="23.045206567s" podCreationTimestamp="2026-03-13 11:14:18 +0000 UTC" firstStartedPulling="2026-03-13 11:14:19.894142171 +0000 UTC m=+1043.370095520" lastFinishedPulling="2026-03-13 11:14:39.224221211 +0000 UTC m=+1062.700174560" observedRunningTime="2026-03-13 11:14:41.014147378 +0000 UTC m=+1064.490100727" watchObservedRunningTime="2026-03-13 11:14:41.045206567 +0000 UTC m=+1064.521159916"
Mar 13 11:14:42.020892 master-0 kubenswrapper[33013]: I0313 11:14:42.020818 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7023-account-create-update-lfxrb" event={"ID":"cf8d6097-bf79-447e-bcea-f17f3ff4f62a","Type":"ContainerStarted","Data":"0babab3264bee164dcde647a57290468ffda77912ccbf0f4a19a81de9b6fe757"}
Mar 13 11:14:42.025282 master-0 kubenswrapper[33013]: I0313 11:14:42.025237 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c8594cc5d-xblpj" event={"ID":"1c519b8e-ceb8-4775-b2b0-76a825ca7a9a","Type":"ContainerStarted","Data":"44f0c18378462c4bca54c28d9cfbfec3ad29ebb75cac51d04cf7b5f232b8fdae"}
Mar 13 11:14:42.025446 master-0 kubenswrapper[33013]: I0313 11:14:42.025380 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:42.025507 master-0 kubenswrapper[33013]: I0313 11:14:42.025454 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:42.028685 master-0 kubenswrapper[33013]: I0313 11:14:42.028635 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-982f-account-create-update-bn86g" event={"ID":"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7","Type":"ContainerStarted","Data":"3fa2353417ab56957ce19e47845f8b3d08863b7340662a8120cdbf906343b4fe"}
Mar 13 11:14:42.031672 master-0 kubenswrapper[33013]: I0313 11:14:42.031299 33013 generic.go:334] "Generic (PLEG): container finished" podID="d95c31a9-2573-4ac0-8513-5b0889aeb289" containerID="625ca0b1dfbcdab65b4ac00ec5a702c749dbfa832c7273b90a6b0bd4e6b28a86" exitCode=0
Mar 13 11:14:42.031672 master-0 kubenswrapper[33013]: I0313 11:14:42.031375 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-66z58" event={"ID":"d95c31a9-2573-4ac0-8513-5b0889aeb289","Type":"ContainerDied","Data":"625ca0b1dfbcdab65b4ac00ec5a702c749dbfa832c7273b90a6b0bd4e6b28a86"}
Mar 13 11:14:42.036036 master-0 kubenswrapper[33013]: I0313 11:14:42.036009 33013 generic.go:334] "Generic (PLEG): container finished" podID="b2f57185-3e58-4df7-9593-cfaf45287839" containerID="c66d939fd13793e2039c6217ac216c2ff954d70c1140f5662ea216436e0e4052" exitCode=0
Mar 13 11:14:42.036140 master-0 kubenswrapper[33013]: I0313 11:14:42.036061 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bjft7" event={"ID":"b2f57185-3e58-4df7-9593-cfaf45287839","Type":"ContainerDied","Data":"c66d939fd13793e2039c6217ac216c2ff954d70c1140f5662ea216436e0e4052"}
Mar 13 11:14:42.039029 master-0 kubenswrapper[33013]: I0313 11:14:42.038981 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9" event={"ID":"8aea14ff-34fc-4c33-940e-d438ef8f2bd9","Type":"ContainerStarted","Data":"51fe1a2e91d613021fd8a27eac0069246a1c85357b381971a6da6c97637c865a"}
Mar 13 11:14:42.452209 master-0 kubenswrapper[33013]: I0313 11:14:42.452123 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-7023-account-create-update-lfxrb" podStartSLOduration=10.452100884 podStartE2EDuration="10.452100884s" podCreationTimestamp="2026-03-13 11:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:42.44916902 +0000 UTC m=+1065.925122369" watchObservedRunningTime="2026-03-13 11:14:42.452100884 +0000 UTC m=+1065.928054223"
Mar 13 11:14:42.549642 master-0 kubenswrapper[33013]: I0313 11:14:42.549518 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-982f-account-create-update-bn86g" podStartSLOduration=10.549495781 podStartE2EDuration="10.549495781s" podCreationTimestamp="2026-03-13 11:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:42.476861602 +0000 UTC m=+1065.952814951" watchObservedRunningTime="2026-03-13 11:14:42.549495781 +0000 UTC m=+1066.025449130"
Mar 13 11:14:42.601356 master-0 kubenswrapper[33013]: I0313 11:14:42.601283 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9" podStartSLOduration=10.601265972 podStartE2EDuration="10.601265972s" podCreationTimestamp="2026-03-13 11:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:42.53234283 +0000 UTC m=+1066.008296179" watchObservedRunningTime="2026-03-13 11:14:42.601265972 +0000 UTC m=+1066.077219321"
Mar 13 11:14:42.605772 master-0 kubenswrapper[33013]: I0313 11:14:42.605713 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-c8594cc5d-xblpj" podStartSLOduration=16.605699779 podStartE2EDuration="16.605699779s" podCreationTimestamp="2026-03-13 11:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:42.559020883 +0000 UTC m=+1066.034974232" watchObservedRunningTime="2026-03-13 11:14:42.605699779 +0000 UTC m=+1066.081653128"
Mar 13 11:14:42.620213 master-0 kubenswrapper[33013]: I0313 11:14:42.620125 33013 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:42.620213 master-0 kubenswrapper[33013]: I0313 11:14:42.620207 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:42.621820 master-0 kubenswrapper[33013]: I0313 11:14:42.621783 33013 scope.go:117] "RemoveContainer" containerID="2ec5e4187d748bfcb0f69acf7f1418032f392828791caecd577673d2b26f01ef"
Mar 13 11:14:46.678278 master-0 kubenswrapper[33013]: I0313 11:14:46.677778 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:46.679622 master-0 kubenswrapper[33013]: I0313 11:14:46.679552 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-c8594cc5d-xblpj"
Mar 13 11:14:47.205549 master-0 kubenswrapper[33013]: I0313 11:14:47.205305 33013 generic.go:334] "Generic (PLEG): container finished" podID="cf8d6097-bf79-447e-bcea-f17f3ff4f62a" containerID="0babab3264bee164dcde647a57290468ffda77912ccbf0f4a19a81de9b6fe757" exitCode=0
Mar 13 11:14:47.205549 master-0 kubenswrapper[33013]: I0313 11:14:47.205407 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7023-account-create-update-lfxrb" event={"ID":"cf8d6097-bf79-447e-bcea-f17f3ff4f62a","Type":"ContainerDied","Data":"0babab3264bee164dcde647a57290468ffda77912ccbf0f4a19a81de9b6fe757"}
Mar 13 11:14:47.213511 master-0 kubenswrapper[33013]: I0313 11:14:47.213446 33013 generic.go:334] "Generic (PLEG): container finished" podID="68f912cc-d199-4a01-bec5-765cc17824bb" containerID="84e69e9acefe53f167b6f955bd0094cd028100c0bc2ac3c31b19041e22f0f0d1" exitCode=0
Mar 13 11:14:47.213828 master-0 kubenswrapper[33013]: I0313 11:14:47.213526 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-fxhxg" event={"ID":"68f912cc-d199-4a01-bec5-765cc17824bb","Type":"ContainerDied","Data":"84e69e9acefe53f167b6f955bd0094cd028100c0bc2ac3c31b19041e22f0f0d1"}
Mar 13 11:14:47.222457 master-0 kubenswrapper[33013]: I0313 11:14:47.222390 33013 generic.go:334] "Generic (PLEG): container finished" podID="efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7" containerID="3fa2353417ab56957ce19e47845f8b3d08863b7340662a8120cdbf906343b4fe" exitCode=0
Mar 13 11:14:47.222675 master-0 kubenswrapper[33013]: I0313 11:14:47.222646 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-982f-account-create-update-bn86g" event={"ID":"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7","Type":"ContainerDied","Data":"3fa2353417ab56957ce19e47845f8b3d08863b7340662a8120cdbf906343b4fe"}
Mar 13 11:14:47.226018 master-0 kubenswrapper[33013]: I0313 11:14:47.225960 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-66z58" event={"ID":"d95c31a9-2573-4ac0-8513-5b0889aeb289","Type":"ContainerDied","Data":"a31a038100245dacfb5b2c8fa0ad2c678d2ccc21aa34f9e589984d393f568ea4"}
Mar 13 11:14:47.226096 master-0 kubenswrapper[33013]: I0313 11:14:47.226029 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a31a038100245dacfb5b2c8fa0ad2c678d2ccc21aa34f9e589984d393f568ea4"
Mar 13 11:14:47.236675 master-0 kubenswrapper[33013]: I0313 11:14:47.236459 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tjt8v" event={"ID":"5b95da59-1633-4171-a92c-e192d65465f4","Type":"ContainerDied","Data":"6d39bef9556042e4f9f798532cab8c2249f5e29e16203a64efd30d2ecd41f43e"}
Mar 13 11:14:47.236675 master-0 kubenswrapper[33013]: I0313 11:14:47.236524 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d39bef9556042e4f9f798532cab8c2249f5e29e16203a64efd30d2ecd41f43e"
Mar 13 11:14:47.241118 master-0 kubenswrapper[33013]: I0313 11:14:47.241030 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bjft7" event={"ID":"b2f57185-3e58-4df7-9593-cfaf45287839","Type":"ContainerDied","Data":"e79b0d5ed887a1d89de839603e274f601c86cfa248cb60ba1d10df0ae3efb4db"}
Mar 13 11:14:47.241118 master-0 kubenswrapper[33013]: I0313 11:14:47.241088 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e79b0d5ed887a1d89de839603e274f601c86cfa248cb60ba1d10df0ae3efb4db"
Mar 13 11:14:47.257100 master-0 kubenswrapper[33013]: I0313 11:14:47.256774 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:47.257100 master-0 kubenswrapper[33013]: I0313 11:14:47.256989 33013 generic.go:334] "Generic (PLEG): container finished" podID="8aea14ff-34fc-4c33-940e-d438ef8f2bd9" containerID="51fe1a2e91d613021fd8a27eac0069246a1c85357b381971a6da6c97637c865a" exitCode=0
Mar 13 11:14:47.257449 master-0 kubenswrapper[33013]: I0313 11:14:47.257305 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9" event={"ID":"8aea14ff-34fc-4c33-940e-d438ef8f2bd9","Type":"ContainerDied","Data":"51fe1a2e91d613021fd8a27eac0069246a1c85357b381971a6da6c97637c865a"}
Mar 13 11:14:47.283614 master-0 kubenswrapper[33013]: I0313 11:14:47.283403 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-66z58"
Mar 13 11:14:47.310464 master-0 kubenswrapper[33013]: I0313 11:14:47.310070 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bjft7"
Mar 13 11:14:47.421854 master-0 kubenswrapper[33013]: I0313 11:14:47.417701 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95c31a9-2573-4ac0-8513-5b0889aeb289-operator-scripts\") pod \"d95c31a9-2573-4ac0-8513-5b0889aeb289\" (UID: \"d95c31a9-2573-4ac0-8513-5b0889aeb289\") "
Mar 13 11:14:47.421854 master-0 kubenswrapper[33013]: I0313 11:14:47.417803 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m7c7\" (UniqueName: \"kubernetes.io/projected/5b95da59-1633-4171-a92c-e192d65465f4-kube-api-access-7m7c7\") pod \"5b95da59-1633-4171-a92c-e192d65465f4\" (UID: \"5b95da59-1633-4171-a92c-e192d65465f4\") "
Mar 13 11:14:47.421854 master-0 kubenswrapper[33013]: I0313 11:14:47.418109 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j24fd\" (UniqueName: \"kubernetes.io/projected/d95c31a9-2573-4ac0-8513-5b0889aeb289-kube-api-access-j24fd\") pod \"d95c31a9-2573-4ac0-8513-5b0889aeb289\" (UID: \"d95c31a9-2573-4ac0-8513-5b0889aeb289\") "
Mar 13 11:14:47.421854 master-0 kubenswrapper[33013]: I0313 11:14:47.418171 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b95da59-1633-4171-a92c-e192d65465f4-operator-scripts\") pod \"5b95da59-1633-4171-a92c-e192d65465f4\" (UID: \"5b95da59-1633-4171-a92c-e192d65465f4\") "
Mar 13 11:14:47.421854 master-0 kubenswrapper[33013]: I0313 11:14:47.418283 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f57185-3e58-4df7-9593-cfaf45287839-operator-scripts\") pod \"b2f57185-3e58-4df7-9593-cfaf45287839\" (UID: \"b2f57185-3e58-4df7-9593-cfaf45287839\") "
Mar 13 11:14:47.421854 master-0 kubenswrapper[33013]: I0313 11:14:47.418313 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swgnd\" (UniqueName: \"kubernetes.io/projected/b2f57185-3e58-4df7-9593-cfaf45287839-kube-api-access-swgnd\") pod \"b2f57185-3e58-4df7-9593-cfaf45287839\" (UID: \"b2f57185-3e58-4df7-9593-cfaf45287839\") "
Mar 13 11:14:47.421854 master-0 kubenswrapper[33013]: I0313 11:14:47.421357 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d95c31a9-2573-4ac0-8513-5b0889aeb289-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d95c31a9-2573-4ac0-8513-5b0889aeb289" (UID: "d95c31a9-2573-4ac0-8513-5b0889aeb289"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:14:47.421854 master-0 kubenswrapper[33013]: I0313 11:14:47.421358 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b95da59-1633-4171-a92c-e192d65465f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b95da59-1633-4171-a92c-e192d65465f4" (UID: "5b95da59-1633-4171-a92c-e192d65465f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:14:47.422442 master-0 kubenswrapper[33013]: I0313 11:14:47.422412 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2f57185-3e58-4df7-9593-cfaf45287839-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2f57185-3e58-4df7-9593-cfaf45287839" (UID: "b2f57185-3e58-4df7-9593-cfaf45287839"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:14:47.424102 master-0 kubenswrapper[33013]: I0313 11:14:47.423628 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f57185-3e58-4df7-9593-cfaf45287839-kube-api-access-swgnd" (OuterVolumeSpecName: "kube-api-access-swgnd") pod "b2f57185-3e58-4df7-9593-cfaf45287839" (UID: "b2f57185-3e58-4df7-9593-cfaf45287839"). InnerVolumeSpecName "kube-api-access-swgnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:14:47.429949 master-0 kubenswrapper[33013]: I0313 11:14:47.429901 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95c31a9-2573-4ac0-8513-5b0889aeb289-kube-api-access-j24fd" (OuterVolumeSpecName: "kube-api-access-j24fd") pod "d95c31a9-2573-4ac0-8513-5b0889aeb289" (UID: "d95c31a9-2573-4ac0-8513-5b0889aeb289"). InnerVolumeSpecName "kube-api-access-j24fd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:14:47.433912 master-0 kubenswrapper[33013]: I0313 11:14:47.433843 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b95da59-1633-4171-a92c-e192d65465f4-kube-api-access-7m7c7" (OuterVolumeSpecName: "kube-api-access-7m7c7") pod "5b95da59-1633-4171-a92c-e192d65465f4" (UID: "5b95da59-1633-4171-a92c-e192d65465f4"). InnerVolumeSpecName "kube-api-access-7m7c7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:14:47.523700 master-0 kubenswrapper[33013]: I0313 11:14:47.523641 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2f57185-3e58-4df7-9593-cfaf45287839-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:47.523700 master-0 kubenswrapper[33013]: I0313 11:14:47.523697 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swgnd\" (UniqueName: \"kubernetes.io/projected/b2f57185-3e58-4df7-9593-cfaf45287839-kube-api-access-swgnd\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:47.523700 master-0 kubenswrapper[33013]: I0313 11:14:47.523712 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95c31a9-2573-4ac0-8513-5b0889aeb289-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:47.524028 master-0 kubenswrapper[33013]: I0313 11:14:47.523725 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m7c7\" (UniqueName: \"kubernetes.io/projected/5b95da59-1633-4171-a92c-e192d65465f4-kube-api-access-7m7c7\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:47.524028 master-0 kubenswrapper[33013]: I0313 11:14:47.523737 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j24fd\" (UniqueName: \"kubernetes.io/projected/d95c31a9-2573-4ac0-8513-5b0889aeb289-kube-api-access-j24fd\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:47.524028 master-0 kubenswrapper[33013]: I0313 11:14:47.523751 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b95da59-1633-4171-a92c-e192d65465f4-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:48.278764 master-0 kubenswrapper[33013]: I0313 11:14:48.278491 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" event={"ID":"3b99f02d-f8e2-497b-b68b-8e445e7b7541","Type":"ContainerStarted","Data":"2ead165e6279335669cbfa9d2a1dfb59796d113b7bcad8ebce4329c267ef453d"}
Mar 13 11:14:48.280034 master-0 kubenswrapper[33013]: I0313 11:14:48.279985 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh"
Mar 13 11:14:48.281833 master-0 kubenswrapper[33013]: I0313 11:14:48.281794 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerStarted","Data":"1a86bbb947ba3f813513f8046926c7f0177dffe72dd44fe7cfa398dfb524f657"}
Mar 13 11:14:48.282955 master-0 kubenswrapper[33013]: I0313 11:14:48.282903 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bjft7"
Mar 13 11:14:48.287151 master-0 kubenswrapper[33013]: I0313 11:14:48.287116 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tjt8v"
Mar 13 11:14:48.287256 master-0 kubenswrapper[33013]: I0313 11:14:48.287168 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-66z58"
Mar 13 11:14:48.893610 master-0 kubenswrapper[33013]: I0313 11:14:48.892697 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-982f-account-create-update-bn86g"
Mar 13 11:14:48.987491 master-0 kubenswrapper[33013]: I0313 11:14:48.983343 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-operator-scripts\") pod \"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\" (UID: \"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\") "
Mar 13 11:14:48.987491 master-0 kubenswrapper[33013]: I0313 11:14:48.983676 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd287\" (UniqueName: \"kubernetes.io/projected/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-kube-api-access-zd287\") pod \"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\" (UID: \"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7\") "
Mar 13 11:14:48.987491 master-0 kubenswrapper[33013]: I0313 11:14:48.984849 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7" (UID: "efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:14:48.992534 master-0 kubenswrapper[33013]: I0313 11:14:48.989650 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-kube-api-access-zd287" (OuterVolumeSpecName: "kube-api-access-zd287") pod "efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7" (UID: "efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7"). InnerVolumeSpecName "kube-api-access-zd287". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:14:49.092499 master-0 kubenswrapper[33013]: I0313 11:14:49.090703 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd287\" (UniqueName: \"kubernetes.io/projected/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-kube-api-access-zd287\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:49.092499 master-0 kubenswrapper[33013]: I0313 11:14:49.090758 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:49.222259 master-0 kubenswrapper[33013]: I0313 11:14:49.222211 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9"
Mar 13 11:14:49.235784 master-0 kubenswrapper[33013]: I0313 11:14:49.235749 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-fxhxg"
Mar 13 11:14:49.277955 master-0 kubenswrapper[33013]: I0313 11:14:49.275474 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:49.310499 master-0 kubenswrapper[33013]: I0313 11:14:49.310439 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9" event={"ID":"8aea14ff-34fc-4c33-940e-d438ef8f2bd9","Type":"ContainerDied","Data":"9c5af260129bab0f8065af0791590049822f5c23017e3dc0177fa776815bef70"}
Mar 13 11:14:49.315673 master-0 kubenswrapper[33013]: I0313 11:14:49.315623 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c5af260129bab0f8065af0791590049822f5c23017e3dc0177fa776815bef70"
Mar 13 11:14:49.316216 master-0 kubenswrapper[33013]: I0313 11:14:49.316194 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6ea2-account-create-update-hg4j9"
Mar 13 11:14:49.326801 master-0 kubenswrapper[33013]: I0313 11:14:49.326726 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7023-account-create-update-lfxrb" event={"ID":"cf8d6097-bf79-447e-bcea-f17f3ff4f62a","Type":"ContainerDied","Data":"38b88dc5e99c056bb5b2c26e865b1d3615c4b83aa0eeb5c3878fef92eb91cffa"}
Mar 13 11:14:49.327240 master-0 kubenswrapper[33013]: I0313 11:14:49.327219 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b88dc5e99c056bb5b2c26e865b1d3615c4b83aa0eeb5c3878fef92eb91cffa"
Mar 13 11:14:49.327535 master-0 kubenswrapper[33013]: I0313 11:14:49.327517 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7023-account-create-update-lfxrb"
Mar 13 11:14:49.363544 master-0 kubenswrapper[33013]: I0313 11:14:49.363271 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-fxhxg" event={"ID":"68f912cc-d199-4a01-bec5-765cc17824bb","Type":"ContainerDied","Data":"346e758bd2e97e34002f75524fb36af7a92c624f29271fb277647af25327e53b"}
Mar 13 11:14:49.363544 master-0 kubenswrapper[33013]: I0313 11:14:49.363324 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="346e758bd2e97e34002f75524fb36af7a92c624f29271fb277647af25327e53b"
Mar 13 11:14:49.363544 master-0 kubenswrapper[33013]: I0313 11:14:49.363456 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-fxhxg"
Mar 13 11:14:49.419961 master-0 kubenswrapper[33013]: I0313 11:14:49.419885 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-982f-account-create-update-bn86g"
Mar 13 11:14:49.420286 master-0 kubenswrapper[33013]: I0313 11:14:49.420169 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-982f-account-create-update-bn86g" event={"ID":"efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7","Type":"ContainerDied","Data":"532a529655f078afa3f8d438abd2262d2a0e33f2cd3ed32e3d4d18660f352e03"}
Mar 13 11:14:49.422746 master-0 kubenswrapper[33013]: I0313 11:14:49.422700 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="532a529655f078afa3f8d438abd2262d2a0e33f2cd3ed32e3d4d18660f352e03"
Mar 13 11:14:49.436156 master-0 kubenswrapper[33013]: I0313 11:14:49.436017 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-combined-ca-bundle\") pod \"68f912cc-d199-4a01-bec5-765cc17824bb\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") "
Mar 13 11:14:49.436555 master-0 kubenswrapper[33013]: I0313 11:14:49.436521 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"68f912cc-d199-4a01-bec5-765cc17824bb\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") "
Mar 13 11:14:49.436846 master-0 kubenswrapper[33013]: I0313 11:14:49.436571 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-operator-scripts\") pod \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\" (UID: \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\") "
Mar 13 11:14:49.436846 master-0 kubenswrapper[33013]: I0313 11:14:49.436663 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-operator-scripts\") pod \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\" (UID: \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\") "
Mar 13 11:14:49.436846 master-0 kubenswrapper[33013]: I0313 11:14:49.436701 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tlkv\" (UniqueName: \"kubernetes.io/projected/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-kube-api-access-9tlkv\") pod \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\" (UID: \"8aea14ff-34fc-4c33-940e-d438ef8f2bd9\") "
Mar 13 11:14:49.436846 master-0 kubenswrapper[33013]: I0313 11:14:49.436775 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvrvc\" (UniqueName: \"kubernetes.io/projected/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-kube-api-access-wvrvc\") pod \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\" (UID: \"cf8d6097-bf79-447e-bcea-f17f3ff4f62a\") "
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.436884 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-scripts\") pod \"68f912cc-d199-4a01-bec5-765cc17824bb\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") "
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.437000 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-config\") pod \"68f912cc-d199-4a01-bec5-765cc17824bb\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") "
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.437027 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "68f912cc-d199-4a01-bec5-765cc17824bb" (UID: "68f912cc-d199-4a01-bec5-765cc17824bb"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.437059 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic\") pod \"68f912cc-d199-4a01-bec5-765cc17824bb\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") "
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.437179 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/68f912cc-d199-4a01-bec5-765cc17824bb-etc-podinfo\") pod \"68f912cc-d199-4a01-bec5-765cc17824bb\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") "
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.437259 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxp9q\" (UniqueName: \"kubernetes.io/projected/68f912cc-d199-4a01-bec5-765cc17824bb-kube-api-access-lxp9q\") pod \"68f912cc-d199-4a01-bec5-765cc17824bb\" (UID: \"68f912cc-d199-4a01-bec5-765cc17824bb\") "
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.438089 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "68f912cc-d199-4a01-bec5-765cc17824bb" (UID: "68f912cc-d199-4a01-bec5-765cc17824bb"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.438465 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8aea14ff-34fc-4c33-940e-d438ef8f2bd9" (UID: "8aea14ff-34fc-4c33-940e-d438ef8f2bd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.438552 33013 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.438605 33013 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/68f912cc-d199-4a01-bec5-765cc17824bb-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Mar 13 11:14:49.441819 master-0 kubenswrapper[33013]: I0313 11:14:49.439676 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf8d6097-bf79-447e-bcea-f17f3ff4f62a" (UID: "cf8d6097-bf79-447e-bcea-f17f3ff4f62a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:14:49.455208 master-0 kubenswrapper[33013]: I0313 11:14:49.454790 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/68f912cc-d199-4a01-bec5-765cc17824bb-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "68f912cc-d199-4a01-bec5-765cc17824bb" (UID: "68f912cc-d199-4a01-bec5-765cc17824bb"). InnerVolumeSpecName "etc-podinfo".
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 13 11:14:49.460732 master-0 kubenswrapper[33013]: I0313 11:14:49.460254 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-scripts" (OuterVolumeSpecName: "scripts") pod "68f912cc-d199-4a01-bec5-765cc17824bb" (UID: "68f912cc-d199-4a01-bec5-765cc17824bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:49.461263 master-0 kubenswrapper[33013]: I0313 11:14:49.460966 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-kube-api-access-wvrvc" (OuterVolumeSpecName: "kube-api-access-wvrvc") pod "cf8d6097-bf79-447e-bcea-f17f3ff4f62a" (UID: "cf8d6097-bf79-447e-bcea-f17f3ff4f62a"). InnerVolumeSpecName "kube-api-access-wvrvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:49.480389 master-0 kubenswrapper[33013]: I0313 11:14:49.472189 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-kube-api-access-9tlkv" (OuterVolumeSpecName: "kube-api-access-9tlkv") pod "8aea14ff-34fc-4c33-940e-d438ef8f2bd9" (UID: "8aea14ff-34fc-4c33-940e-d438ef8f2bd9"). InnerVolumeSpecName "kube-api-access-9tlkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:49.480389 master-0 kubenswrapper[33013]: I0313 11:14:49.472374 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f912cc-d199-4a01-bec5-765cc17824bb-kube-api-access-lxp9q" (OuterVolumeSpecName: "kube-api-access-lxp9q") pod "68f912cc-d199-4a01-bec5-765cc17824bb" (UID: "68f912cc-d199-4a01-bec5-765cc17824bb"). InnerVolumeSpecName "kube-api-access-lxp9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:49.540922 master-0 kubenswrapper[33013]: I0313 11:14:49.540433 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-config" (OuterVolumeSpecName: "config") pod "68f912cc-d199-4a01-bec5-765cc17824bb" (UID: "68f912cc-d199-4a01-bec5-765cc17824bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:49.546115 master-0 kubenswrapper[33013]: I0313 11:14:49.544851 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:49.546115 master-0 kubenswrapper[33013]: I0313 11:14:49.545534 33013 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/68f912cc-d199-4a01-bec5-765cc17824bb-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:49.546115 master-0 kubenswrapper[33013]: I0313 11:14:49.545563 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxp9q\" (UniqueName: \"kubernetes.io/projected/68f912cc-d199-4a01-bec5-765cc17824bb-kube-api-access-lxp9q\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:49.546115 master-0 kubenswrapper[33013]: I0313 11:14:49.545576 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:49.546115 master-0 kubenswrapper[33013]: I0313 11:14:49.545605 33013 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:49.546115 master-0 kubenswrapper[33013]: I0313 11:14:49.545618 33013 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tlkv\" (UniqueName: \"kubernetes.io/projected/8aea14ff-34fc-4c33-940e-d438ef8f2bd9-kube-api-access-9tlkv\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:49.546115 master-0 kubenswrapper[33013]: I0313 11:14:49.545627 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvrvc\" (UniqueName: \"kubernetes.io/projected/cf8d6097-bf79-447e-bcea-f17f3ff4f62a-kube-api-access-wvrvc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:49.546115 master-0 kubenswrapper[33013]: I0313 11:14:49.545635 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:49.590112 master-0 kubenswrapper[33013]: I0313 11:14:49.589813 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68f912cc-d199-4a01-bec5-765cc17824bb" (UID: "68f912cc-d199-4a01-bec5-765cc17824bb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:49.669963 master-0 kubenswrapper[33013]: I0313 11:14:49.669901 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-6ldqw"] Mar 13 11:14:49.670401 master-0 kubenswrapper[33013]: E0313 11:14:49.670374 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api" Mar 13 11:14:49.670401 master-0 kubenswrapper[33013]: I0313 11:14:49.670394 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api" Mar 13 11:14:49.670476 master-0 kubenswrapper[33013]: E0313 11:14:49.670410 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95c31a9-2573-4ac0-8513-5b0889aeb289" containerName="mariadb-database-create" Mar 13 11:14:49.670476 master-0 kubenswrapper[33013]: I0313 11:14:49.670418 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95c31a9-2573-4ac0-8513-5b0889aeb289" containerName="mariadb-database-create" Mar 13 11:14:49.670476 master-0 kubenswrapper[33013]: E0313 11:14:49.670461 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7" containerName="mariadb-account-create-update" Mar 13 11:14:49.670476 master-0 kubenswrapper[33013]: I0313 11:14:49.670468 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7" containerName="mariadb-account-create-update" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: E0313 11:14:49.670482 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f912cc-d199-4a01-bec5-765cc17824bb" containerName="ironic-inspector-db-sync" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: I0313 11:14:49.670490 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f912cc-d199-4a01-bec5-765cc17824bb" containerName="ironic-inspector-db-sync" Mar 13 
11:14:49.670782 master-0 kubenswrapper[33013]: E0313 11:14:49.670532 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api-log" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: I0313 11:14:49.670539 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api-log" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: E0313 11:14:49.670559 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f57185-3e58-4df7-9593-cfaf45287839" containerName="mariadb-database-create" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: I0313 11:14:49.670565 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f57185-3e58-4df7-9593-cfaf45287839" containerName="mariadb-database-create" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: E0313 11:14:49.670602 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="init" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: I0313 11:14:49.670610 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="init" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: E0313 11:14:49.670620 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8d6097-bf79-447e-bcea-f17f3ff4f62a" containerName="mariadb-account-create-update" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: I0313 11:14:49.670626 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8d6097-bf79-447e-bcea-f17f3ff4f62a" containerName="mariadb-account-create-update" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: E0313 11:14:49.670639 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b95da59-1633-4171-a92c-e192d65465f4" containerName="mariadb-database-create" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: I0313 
11:14:49.670646 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b95da59-1633-4171-a92c-e192d65465f4" containerName="mariadb-database-create" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: E0313 11:14:49.670657 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aea14ff-34fc-4c33-940e-d438ef8f2bd9" containerName="mariadb-account-create-update" Mar 13 11:14:49.670782 master-0 kubenswrapper[33013]: I0313 11:14:49.670664 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aea14ff-34fc-4c33-940e-d438ef8f2bd9" containerName="mariadb-account-create-update" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.670884 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f57185-3e58-4df7-9593-cfaf45287839" containerName="mariadb-database-create" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.670914 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b95da59-1633-4171-a92c-e192d65465f4" containerName="mariadb-database-create" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.670930 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="efeed3fc-b40a-4ac4-b14e-bae3a3fb1da7" containerName="mariadb-account-create-update" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.670947 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f912cc-d199-4a01-bec5-765cc17824bb" containerName="ironic-inspector-db-sync" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.670984 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api-log" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.670996 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8d6097-bf79-447e-bcea-f17f3ff4f62a" containerName="mariadb-account-create-update" Mar 13 11:14:49.678819 master-0 
kubenswrapper[33013]: I0313 11:14:49.671009 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.671021 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aea14ff-34fc-4c33-940e-d438ef8f2bd9" containerName="mariadb-account-create-update" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.671031 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.671063 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95c31a9-2573-4ac0-8513-5b0889aeb289" containerName="mariadb-database-create" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: E0313 11:14:49.671266 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.671275 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0341ff60-4819-448f-98f7-4ee8216d5d39" containerName="ironic-api" Mar 13 11:14:49.678819 master-0 kubenswrapper[33013]: I0313 11:14:49.671545 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f912cc-d199-4a01-bec5-765cc17824bb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:49.679479 master-0 kubenswrapper[33013]: I0313 11:14:49.679345 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.773948 master-0 kubenswrapper[33013]: I0313 11:14:49.773885 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-swift-storage-0\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.774176 master-0 kubenswrapper[33013]: I0313 11:14:49.773987 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jrnj\" (UniqueName: \"kubernetes.io/projected/8f3b3913-1778-4d02-8259-2968de468f92-kube-api-access-5jrnj\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.774176 master-0 kubenswrapper[33013]: I0313 11:14:49.774058 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-config\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.774176 master-0 kubenswrapper[33013]: I0313 11:14:49.774149 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-svc\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.774462 master-0 kubenswrapper[33013]: I0313 11:14:49.774194 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-sb\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.774462 master-0 kubenswrapper[33013]: I0313 11:14:49.774326 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-nb\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.797884 master-0 kubenswrapper[33013]: I0313 11:14:49.797818 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-6ldqw"] Mar 13 11:14:49.832352 master-0 kubenswrapper[33013]: I0313 11:14:49.830576 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 11:14:49.842982 master-0 kubenswrapper[33013]: I0313 11:14:49.842922 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 13 11:14:49.846230 master-0 kubenswrapper[33013]: I0313 11:14:49.845626 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 11:14:49.846398 master-0 kubenswrapper[33013]: I0313 11:14:49.846374 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 13 11:14:49.846454 master-0 kubenswrapper[33013]: I0313 11:14:49.846411 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 13 11:14:49.846711 master-0 kubenswrapper[33013]: I0313 11:14:49.846655 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877126 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-swift-storage-0\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877192 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-scripts\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877215 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-config\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 
11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877271 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jrnj\" (UniqueName: \"kubernetes.io/projected/8f3b3913-1778-4d02-8259-2968de468f92-kube-api-access-5jrnj\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877325 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-config\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877344 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0f0d23cf-4102-4d89-90f1-090cb737a347-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877387 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877423 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-svc\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 
11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877672 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-sb\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.877920 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-nb\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.878058 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.878113 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.878375 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztlj\" (UniqueName: \"kubernetes.io/projected/0f0d23cf-4102-4d89-90f1-090cb737a347-kube-api-access-kztlj\") pod \"ironic-inspector-0\" (UID: 
\"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.878536 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-sb\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.881769 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-nb\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.882397 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-config\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.882681 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-swift-storage-0\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.883128 master-0 kubenswrapper[33013]: I0313 11:14:49.882684 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-svc\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " 
pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.902191 master-0 kubenswrapper[33013]: I0313 11:14:49.902143 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jrnj\" (UniqueName: \"kubernetes.io/projected/8f3b3913-1778-4d02-8259-2968de468f92-kube-api-access-5jrnj\") pod \"dnsmasq-dns-bfb994cb5-6ldqw\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:49.980926 master-0 kubenswrapper[33013]: I0313 11:14:49.980833 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.980926 master-0 kubenswrapper[33013]: I0313 11:14:49.980921 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.981258 master-0 kubenswrapper[33013]: I0313 11:14:49.981031 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kztlj\" (UniqueName: \"kubernetes.io/projected/0f0d23cf-4102-4d89-90f1-090cb737a347-kube-api-access-kztlj\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.981258 master-0 kubenswrapper[33013]: I0313 11:14:49.981094 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-scripts\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " 
pod="openstack/ironic-inspector-0" Mar 13 11:14:49.981258 master-0 kubenswrapper[33013]: I0313 11:14:49.981154 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-config\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.981258 master-0 kubenswrapper[33013]: I0313 11:14:49.981218 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0f0d23cf-4102-4d89-90f1-090cb737a347-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.981445 master-0 kubenswrapper[33013]: I0313 11:14:49.981269 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.981942 master-0 kubenswrapper[33013]: I0313 11:14:49.981868 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:49.983194 master-0 kubenswrapper[33013]: I0313 11:14:49.983088 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:50.000155 master-0 
kubenswrapper[33013]: I0313 11:14:49.995730 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-scripts\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:50.000155 master-0 kubenswrapper[33013]: I0313 11:14:49.998359 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0f0d23cf-4102-4d89-90f1-090cb737a347-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:50.001941 master-0 kubenswrapper[33013]: I0313 11:14:50.001867 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-config\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:50.002563 master-0 kubenswrapper[33013]: I0313 11:14:50.002518 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:50.021614 master-0 kubenswrapper[33013]: I0313 11:14:50.009556 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kztlj\" (UniqueName: \"kubernetes.io/projected/0f0d23cf-4102-4d89-90f1-090cb737a347-kube-api-access-kztlj\") pod \"ironic-inspector-0\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " pod="openstack/ironic-inspector-0" Mar 13 11:14:50.058319 master-0 kubenswrapper[33013]: E0313 11:14:50.058245 33013 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefeed3fc_b40a_4ac4_b14e_bae3a3fb1da7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68f912cc_d199_4a01_bec5_765cc17824bb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf8d6097_bf79_447e_bcea_f17f3ff4f62a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefeed3fc_b40a_4ac4_b14e_bae3a3fb1da7.slice/crio-532a529655f078afa3f8d438abd2262d2a0e33f2cd3ed32e3d4d18660f352e03\": RecentStats: unable to find data in memory cache]" Mar 13 11:14:50.078374 master-0 kubenswrapper[33013]: I0313 11:14:50.078290 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:50.192733 master-0 kubenswrapper[33013]: I0313 11:14:50.189074 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 13 11:14:50.741513 master-0 kubenswrapper[33013]: I0313 11:14:50.738889 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-6ldqw"] Mar 13 11:14:51.021081 master-0 kubenswrapper[33013]: I0313 11:14:51.020985 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 11:14:51.456236 master-0 kubenswrapper[33013]: I0313 11:14:51.456174 33013 generic.go:334] "Generic (PLEG): container finished" podID="0f0d23cf-4102-4d89-90f1-090cb737a347" containerID="feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4" exitCode=0 Mar 13 11:14:51.456539 master-0 kubenswrapper[33013]: I0313 11:14:51.456270 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0f0d23cf-4102-4d89-90f1-090cb737a347","Type":"ContainerDied","Data":"feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4"} Mar 13 11:14:51.456539 master-0 kubenswrapper[33013]: I0313 11:14:51.456302 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0f0d23cf-4102-4d89-90f1-090cb737a347","Type":"ContainerStarted","Data":"2d25b27b42a20bb2447467cd283667e067fda2a7158ca144fbac4bcb0553ac48"} Mar 13 11:14:51.457938 master-0 kubenswrapper[33013]: I0313 11:14:51.457912 33013 generic.go:334] "Generic (PLEG): container finished" podID="8f3b3913-1778-4d02-8259-2968de468f92" containerID="8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b" exitCode=0 Mar 13 11:14:51.458025 master-0 kubenswrapper[33013]: I0313 11:14:51.457944 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" event={"ID":"8f3b3913-1778-4d02-8259-2968de468f92","Type":"ContainerDied","Data":"8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b"} Mar 13 11:14:51.458025 master-0 kubenswrapper[33013]: I0313 11:14:51.457965 33013 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" event={"ID":"8f3b3913-1778-4d02-8259-2968de468f92","Type":"ContainerStarted","Data":"89c684586b4e7d54732495371ec877e93d265d231d2a36877fa52ffd9515356a"} Mar 13 11:14:52.483573 master-0 kubenswrapper[33013]: I0313 11:14:52.483520 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" event={"ID":"8f3b3913-1778-4d02-8259-2968de468f92","Type":"ContainerStarted","Data":"8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e"} Mar 13 11:14:52.486674 master-0 kubenswrapper[33013]: I0313 11:14:52.483975 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:14:52.520955 master-0 kubenswrapper[33013]: I0313 11:14:52.520830 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" podStartSLOduration=3.520807479 podStartE2EDuration="3.520807479s" podCreationTimestamp="2026-03-13 11:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:14:52.512122921 +0000 UTC m=+1075.988076270" watchObservedRunningTime="2026-03-13 11:14:52.520807479 +0000 UTC m=+1075.996760828" Mar 13 11:14:52.660705 master-0 kubenswrapper[33013]: I0313 11:14:52.659561 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-7f9d77888-kwqwh" Mar 13 11:14:53.501965 master-0 kubenswrapper[33013]: I0313 11:14:53.501910 33013 generic.go:334] "Generic (PLEG): container finished" podID="e16baf7d-8440-4431-a184-523ae34f6e6f" containerID="1a86bbb947ba3f813513f8046926c7f0177dffe72dd44fe7cfa398dfb524f657" exitCode=0 Mar 13 11:14:53.502729 master-0 kubenswrapper[33013]: I0313 11:14:53.502279 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" 
event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerDied","Data":"1a86bbb947ba3f813513f8046926c7f0177dffe72dd44fe7cfa398dfb524f657"} Mar 13 11:14:53.701750 master-0 kubenswrapper[33013]: I0313 11:14:53.700547 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 11:14:54.793614 master-0 kubenswrapper[33013]: I0313 11:14:54.793366 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lh728"] Mar 13 11:14:54.797602 master-0 kubenswrapper[33013]: I0313 11:14:54.795831 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:54.801962 master-0 kubenswrapper[33013]: I0313 11:14:54.801915 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 13 11:14:54.818056 master-0 kubenswrapper[33013]: I0313 11:14:54.817904 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 13 11:14:54.825602 master-0 kubenswrapper[33013]: I0313 11:14:54.820615 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lh728"] Mar 13 11:14:54.985008 master-0 kubenswrapper[33013]: I0313 11:14:54.984902 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:54.985423 master-0 kubenswrapper[33013]: I0313 11:14:54.985408 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-config-data\") pod 
\"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:54.985540 master-0 kubenswrapper[33013]: I0313 11:14:54.985525 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxgsk\" (UniqueName: \"kubernetes.io/projected/3a8b658f-2cd1-4d6c-806b-c234244637df-kube-api-access-xxgsk\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:54.985786 master-0 kubenswrapper[33013]: I0313 11:14:54.985762 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-scripts\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:55.091220 master-0 kubenswrapper[33013]: I0313 11:14:55.091132 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-config-data\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:55.091220 master-0 kubenswrapper[33013]: I0313 11:14:55.091208 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxgsk\" (UniqueName: \"kubernetes.io/projected/3a8b658f-2cd1-4d6c-806b-c234244637df-kube-api-access-xxgsk\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:55.091516 master-0 kubenswrapper[33013]: I0313 11:14:55.091309 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-scripts\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:55.091516 master-0 kubenswrapper[33013]: I0313 11:14:55.091368 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:55.107611 master-0 kubenswrapper[33013]: I0313 11:14:55.104088 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:55.111624 master-0 kubenswrapper[33013]: I0313 11:14:55.108240 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-scripts\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:55.111624 master-0 kubenswrapper[33013]: I0313 11:14:55.109323 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-config-data\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:55.122735 master-0 kubenswrapper[33013]: I0313 11:14:55.122677 33013 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-xxgsk\" (UniqueName: \"kubernetes.io/projected/3a8b658f-2cd1-4d6c-806b-c234244637df-kube-api-access-xxgsk\") pod \"nova-cell0-conductor-db-sync-lh728\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:55.229728 master-0 kubenswrapper[33013]: I0313 11:14:55.227729 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:14:57.522365 master-0 kubenswrapper[33013]: I0313 11:14:57.522290 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 13 11:14:57.870885 master-0 kubenswrapper[33013]: I0313 11:14:57.870812 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lh728"] Mar 13 11:14:57.873774 master-0 kubenswrapper[33013]: W0313 11:14:57.873736 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a8b658f_2cd1_4d6c_806b_c234244637df.slice/crio-869b08b8ccee9b571c50dbc71b22a9334018e6f5b0d302d8bbfaf7aaeaa9b00c WatchSource:0}: Error finding container 869b08b8ccee9b571c50dbc71b22a9334018e6f5b0d302d8bbfaf7aaeaa9b00c: Status 404 returned error can't find the container with id 869b08b8ccee9b571c50dbc71b22a9334018e6f5b0d302d8bbfaf7aaeaa9b00c Mar 13 11:14:57.876639 master-0 kubenswrapper[33013]: I0313 11:14:57.876457 33013 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 11:14:58.670934 master-0 kubenswrapper[33013]: I0313 11:14:58.670847 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0f0d23cf-4102-4d89-90f1-090cb737a347","Type":"ContainerStarted","Data":"bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e"} Mar 13 11:14:58.671620 master-0 kubenswrapper[33013]: I0313 
11:14:58.671100 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="0f0d23cf-4102-4d89-90f1-090cb737a347" containerName="inspector-pxe-init" containerID="cri-o://bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e" gracePeriod=60 Mar 13 11:14:58.708624 master-0 kubenswrapper[33013]: I0313 11:14:58.701478 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerStarted","Data":"07801e3a787aa94a9b84927c3f31b029c0890d08b2bc3bc682ebd9d49be5999d"} Mar 13 11:14:58.779055 master-0 kubenswrapper[33013]: I0313 11:14:58.778993 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lh728" event={"ID":"3a8b658f-2cd1-4d6c-806b-c234244637df","Type":"ContainerStarted","Data":"869b08b8ccee9b571c50dbc71b22a9334018e6f5b0d302d8bbfaf7aaeaa9b00c"} Mar 13 11:14:59.290679 master-0 kubenswrapper[33013]: I0313 11:14:59.290621 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 13 11:14:59.445320 master-0 kubenswrapper[33013]: I0313 11:14:59.445189 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-scripts\") pod \"0f0d23cf-4102-4d89-90f1-090cb737a347\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " Mar 13 11:14:59.446344 master-0 kubenswrapper[33013]: I0313 11:14:59.446323 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic\") pod \"0f0d23cf-4102-4d89-90f1-090cb737a347\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " Mar 13 11:14:59.446515 master-0 kubenswrapper[33013]: I0313 11:14:59.446494 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-combined-ca-bundle\") pod \"0f0d23cf-4102-4d89-90f1-090cb737a347\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " Mar 13 11:14:59.446877 master-0 kubenswrapper[33013]: I0313 11:14:59.446858 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"0f0d23cf-4102-4d89-90f1-090cb737a347\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " Mar 13 11:14:59.447075 master-0 kubenswrapper[33013]: I0313 11:14:59.447059 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kztlj\" (UniqueName: \"kubernetes.io/projected/0f0d23cf-4102-4d89-90f1-090cb737a347-kube-api-access-kztlj\") pod \"0f0d23cf-4102-4d89-90f1-090cb737a347\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " Mar 13 11:14:59.447222 master-0 
kubenswrapper[33013]: I0313 11:14:59.447165 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "0f0d23cf-4102-4d89-90f1-090cb737a347" (UID: "0f0d23cf-4102-4d89-90f1-090cb737a347"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:14:59.447393 master-0 kubenswrapper[33013]: I0313 11:14:59.447376 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-config\") pod \"0f0d23cf-4102-4d89-90f1-090cb737a347\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " Mar 13 11:14:59.447527 master-0 kubenswrapper[33013]: I0313 11:14:59.447513 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0f0d23cf-4102-4d89-90f1-090cb737a347-etc-podinfo\") pod \"0f0d23cf-4102-4d89-90f1-090cb737a347\" (UID: \"0f0d23cf-4102-4d89-90f1-090cb737a347\") " Mar 13 11:14:59.448454 master-0 kubenswrapper[33013]: I0313 11:14:59.448434 33013 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:59.450488 master-0 kubenswrapper[33013]: I0313 11:14:59.450464 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "0f0d23cf-4102-4d89-90f1-090cb737a347" (UID: "0f0d23cf-4102-4d89-90f1-090cb737a347"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:14:59.452651 master-0 kubenswrapper[33013]: I0313 11:14:59.452420 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-scripts" (OuterVolumeSpecName: "scripts") pod "0f0d23cf-4102-4d89-90f1-090cb737a347" (UID: "0f0d23cf-4102-4d89-90f1-090cb737a347"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:59.452651 master-0 kubenswrapper[33013]: I0313 11:14:59.452439 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f0d23cf-4102-4d89-90f1-090cb737a347-kube-api-access-kztlj" (OuterVolumeSpecName: "kube-api-access-kztlj") pod "0f0d23cf-4102-4d89-90f1-090cb737a347" (UID: "0f0d23cf-4102-4d89-90f1-090cb737a347"). InnerVolumeSpecName "kube-api-access-kztlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:14:59.453131 master-0 kubenswrapper[33013]: I0313 11:14:59.453110 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0f0d23cf-4102-4d89-90f1-090cb737a347-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "0f0d23cf-4102-4d89-90f1-090cb737a347" (UID: "0f0d23cf-4102-4d89-90f1-090cb737a347"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 13 11:14:59.456714 master-0 kubenswrapper[33013]: I0313 11:14:59.455797 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-config" (OuterVolumeSpecName: "config") pod "0f0d23cf-4102-4d89-90f1-090cb737a347" (UID: "0f0d23cf-4102-4d89-90f1-090cb737a347"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:59.514038 master-0 kubenswrapper[33013]: I0313 11:14:59.513972 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f0d23cf-4102-4d89-90f1-090cb737a347" (UID: "0f0d23cf-4102-4d89-90f1-090cb737a347"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:14:59.551439 master-0 kubenswrapper[33013]: I0313 11:14:59.551387 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:59.551439 master-0 kubenswrapper[33013]: I0313 11:14:59.551430 33013 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0f0d23cf-4102-4d89-90f1-090cb737a347-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:59.551439 master-0 kubenswrapper[33013]: I0313 11:14:59.551443 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:59.551725 master-0 kubenswrapper[33013]: I0313 11:14:59.551456 33013 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0f0d23cf-4102-4d89-90f1-090cb737a347-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:59.551725 master-0 kubenswrapper[33013]: I0313 11:14:59.551468 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f0d23cf-4102-4d89-90f1-090cb737a347-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:59.551725 master-0 kubenswrapper[33013]: I0313 11:14:59.551483 33013 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kztlj\" (UniqueName: \"kubernetes.io/projected/0f0d23cf-4102-4d89-90f1-090cb737a347-kube-api-access-kztlj\") on node \"master-0\" DevicePath \"\"" Mar 13 11:14:59.749441 master-0 kubenswrapper[33013]: I0313 11:14:59.749263 33013 generic.go:334] "Generic (PLEG): container finished" podID="0f0d23cf-4102-4d89-90f1-090cb737a347" containerID="bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e" exitCode=0 Mar 13 11:14:59.749441 master-0 kubenswrapper[33013]: I0313 11:14:59.749314 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0f0d23cf-4102-4d89-90f1-090cb737a347","Type":"ContainerDied","Data":"bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e"} Mar 13 11:14:59.749441 master-0 kubenswrapper[33013]: I0313 11:14:59.749410 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 13 11:14:59.750136 master-0 kubenswrapper[33013]: I0313 11:14:59.749804 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0f0d23cf-4102-4d89-90f1-090cb737a347","Type":"ContainerDied","Data":"2d25b27b42a20bb2447467cd283667e067fda2a7158ca144fbac4bcb0553ac48"} Mar 13 11:14:59.750136 master-0 kubenswrapper[33013]: I0313 11:14:59.749840 33013 scope.go:117] "RemoveContainer" containerID="bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e" Mar 13 11:14:59.784365 master-0 kubenswrapper[33013]: I0313 11:14:59.784314 33013 scope.go:117] "RemoveContainer" containerID="feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4" Mar 13 11:14:59.818311 master-0 kubenswrapper[33013]: I0313 11:14:59.818264 33013 scope.go:117] "RemoveContainer" containerID="bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e" Mar 13 11:14:59.862819 master-0 kubenswrapper[33013]: E0313 11:14:59.862743 33013 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e\": container with ID starting with bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e not found: ID does not exist" containerID="bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e" Mar 13 11:14:59.863120 master-0 kubenswrapper[33013]: I0313 11:14:59.862825 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e"} err="failed to get container status \"bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e\": rpc error: code = NotFound desc = could not find container \"bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e\": container with ID starting with bb4b365f8d85cee02e5dbec7da9f5b943fe38fb477bad0e9004b8c93d5c09d3e not found: ID does not exist" Mar 13 11:14:59.863120 master-0 kubenswrapper[33013]: I0313 11:14:59.862873 33013 scope.go:117] "RemoveContainer" containerID="feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4" Mar 13 11:14:59.863990 master-0 kubenswrapper[33013]: E0313 11:14:59.863645 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4\": container with ID starting with feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4 not found: ID does not exist" containerID="feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4" Mar 13 11:14:59.863990 master-0 kubenswrapper[33013]: I0313 11:14:59.863673 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4"} err="failed to get container status 
\"feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4\": rpc error: code = NotFound desc = could not find container \"feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4\": container with ID starting with feb1f083ebaeffad6c6318efac6ad0a9147459bfaf81fbf6724bf82c38f2a4e4 not found: ID does not exist" Mar 13 11:14:59.896381 master-0 kubenswrapper[33013]: I0313 11:14:59.888681 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 11:14:59.914988 master-0 kubenswrapper[33013]: I0313 11:14:59.914879 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 11:14:59.999886 master-0 kubenswrapper[33013]: I0313 11:14:59.999513 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 11:15:00.001183 master-0 kubenswrapper[33013]: E0313 11:15:00.000977 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0d23cf-4102-4d89-90f1-090cb737a347" containerName="ironic-python-agent-init" Mar 13 11:15:00.001183 master-0 kubenswrapper[33013]: I0313 11:15:00.001016 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0d23cf-4102-4d89-90f1-090cb737a347" containerName="ironic-python-agent-init" Mar 13 11:15:00.001183 master-0 kubenswrapper[33013]: E0313 11:15:00.001088 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0d23cf-4102-4d89-90f1-090cb737a347" containerName="inspector-pxe-init" Mar 13 11:15:00.001183 master-0 kubenswrapper[33013]: I0313 11:15:00.001101 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0d23cf-4102-4d89-90f1-090cb737a347" containerName="inspector-pxe-init" Mar 13 11:15:00.007536 master-0 kubenswrapper[33013]: I0313 11:15:00.003321 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f0d23cf-4102-4d89-90f1-090cb737a347" containerName="inspector-pxe-init" Mar 13 11:15:00.031633 master-0 kubenswrapper[33013]: I0313 11:15:00.022803 
33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 13 11:15:00.031633 master-0 kubenswrapper[33013]: I0313 11:15:00.030528 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 13 11:15:00.031633 master-0 kubenswrapper[33013]: I0313 11:15:00.030974 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Mar 13 11:15:00.031633 master-0 kubenswrapper[33013]: I0313 11:15:00.031295 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Mar 13 11:15:00.036291 master-0 kubenswrapper[33013]: I0313 11:15:00.036248 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 13 11:15:00.036657 master-0 kubenswrapper[33013]: I0313 11:15:00.036553 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 11:15:00.043897 master-0 kubenswrapper[33013]: I0313 11:15:00.043816 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 13 11:15:00.081619 master-0 kubenswrapper[33013]: I0313 11:15:00.080418 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:15:00.085611 master-0 kubenswrapper[33013]: I0313 11:15:00.082573 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md4gx\" (UniqueName: \"kubernetes.io/projected/08192f8e-122f-495a-a184-094f712ea9a6-kube-api-access-md4gx\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.085611 master-0 kubenswrapper[33013]: I0313 11:15:00.082703 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.085611 master-0 kubenswrapper[33013]: I0313 11:15:00.082752 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/08192f8e-122f-495a-a184-094f712ea9a6-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.085611 master-0 kubenswrapper[33013]: I0313 11:15:00.082789 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-config\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.085611 master-0 kubenswrapper[33013]: I0313 11:15:00.082819 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.085611 master-0 kubenswrapper[33013]: I0313 11:15:00.082848 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/08192f8e-122f-495a-a184-094f712ea9a6-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.085611 master-0 kubenswrapper[33013]: I0313 11:15:00.083360 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.085611 master-0 kubenswrapper[33013]: I0313 11:15:00.083451 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/08192f8e-122f-495a-a184-094f712ea9a6-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.085611 master-0 kubenswrapper[33013]: I0313 11:15:00.083481 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-scripts\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.185399 master-0 kubenswrapper[33013]: I0313 11:15:00.185296 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.185815 master-0 kubenswrapper[33013]: I0313 11:15:00.185782 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/08192f8e-122f-495a-a184-094f712ea9a6-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.186058 master-0 kubenswrapper[33013]: I0313 11:15:00.186017 33013 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.186160 master-0 kubenswrapper[33013]: I0313 11:15:00.186077 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/08192f8e-122f-495a-a184-094f712ea9a6-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.186160 master-0 kubenswrapper[33013]: I0313 11:15:00.186125 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-scripts\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.186240 master-0 kubenswrapper[33013]: I0313 11:15:00.186173 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md4gx\" (UniqueName: \"kubernetes.io/projected/08192f8e-122f-495a-a184-094f712ea9a6-kube-api-access-md4gx\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.191944 master-0 kubenswrapper[33013]: I0313 11:15:00.186501 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.191944 master-0 kubenswrapper[33013]: I0313 11:15:00.186580 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/08192f8e-122f-495a-a184-094f712ea9a6-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.191944 master-0 kubenswrapper[33013]: I0313 11:15:00.186682 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-config\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.191944 master-0 kubenswrapper[33013]: I0313 11:15:00.188386 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/08192f8e-122f-495a-a184-094f712ea9a6-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.191944 master-0 kubenswrapper[33013]: I0313 11:15:00.190508 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/08192f8e-122f-495a-a184-094f712ea9a6-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.199632 master-0 kubenswrapper[33013]: I0313 11:15:00.197346 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-sl7nf"] Mar 13 11:15:00.199632 master-0 kubenswrapper[33013]: I0313 11:15:00.197683 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" podUID="69fa6a94-1b94-44ad-b7b3-5294d3f76e57" containerName="dnsmasq-dns" containerID="cri-o://c87ada79b3b2ed28b4ccbec15f51a2270514a36795fad9a27b65f22df6634622" gracePeriod=10 Mar 13 
11:15:00.220504 master-0 kubenswrapper[33013]: I0313 11:15:00.219073 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.220504 master-0 kubenswrapper[33013]: I0313 11:15:00.219758 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/08192f8e-122f-495a-a184-094f712ea9a6-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.240632 master-0 kubenswrapper[33013]: I0313 11:15:00.236630 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.240632 master-0 kubenswrapper[33013]: I0313 11:15:00.240399 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-config\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.243840 master-0 kubenswrapper[33013]: I0313 11:15:00.242476 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md4gx\" (UniqueName: \"kubernetes.io/projected/08192f8e-122f-495a-a184-094f712ea9a6-kube-api-access-md4gx\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.243840 master-0 kubenswrapper[33013]: I0313 11:15:00.242723 33013 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-scripts\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.256692 master-0 kubenswrapper[33013]: I0313 11:15:00.249847 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08192f8e-122f-495a-a184-094f712ea9a6-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"08192f8e-122f-495a-a184-094f712ea9a6\") " pod="openstack/ironic-inspector-0" Mar 13 11:15:00.363459 master-0 kubenswrapper[33013]: I0313 11:15:00.363381 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 13 11:15:00.779765 master-0 kubenswrapper[33013]: I0313 11:15:00.768627 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f0d23cf-4102-4d89-90f1-090cb737a347" path="/var/lib/kubelet/pods/0f0d23cf-4102-4d89-90f1-090cb737a347/volumes" Mar 13 11:15:00.865123 master-0 kubenswrapper[33013]: I0313 11:15:00.864994 33013 generic.go:334] "Generic (PLEG): container finished" podID="69fa6a94-1b94-44ad-b7b3-5294d3f76e57" containerID="c87ada79b3b2ed28b4ccbec15f51a2270514a36795fad9a27b65f22df6634622" exitCode=0 Mar 13 11:15:00.865123 master-0 kubenswrapper[33013]: I0313 11:15:00.865065 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" event={"ID":"69fa6a94-1b94-44ad-b7b3-5294d3f76e57","Type":"ContainerDied","Data":"c87ada79b3b2ed28b4ccbec15f51a2270514a36795fad9a27b65f22df6634622"} Mar 13 11:15:00.867967 master-0 kubenswrapper[33013]: E0313 11:15:00.867879 33013 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69fa6a94_1b94_44ad_b7b3_5294d3f76e57.slice/crio-c87ada79b3b2ed28b4ccbec15f51a2270514a36795fad9a27b65f22df6634622.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69fa6a94_1b94_44ad_b7b3_5294d3f76e57.slice/crio-conmon-c87ada79b3b2ed28b4ccbec15f51a2270514a36795fad9a27b65f22df6634622.scope\": RecentStats: unable to find data in memory cache]" Mar 13 11:15:01.124105 master-0 kubenswrapper[33013]: I0313 11:15:01.123248 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" Mar 13 11:15:01.132191 master-0 kubenswrapper[33013]: I0313 11:15:01.131940 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-sb\") pod \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " Mar 13 11:15:01.132191 master-0 kubenswrapper[33013]: I0313 11:15:01.132155 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-nb\") pod \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " Mar 13 11:15:01.132385 master-0 kubenswrapper[33013]: I0313 11:15:01.132205 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-svc\") pod \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " Mar 13 11:15:01.132385 master-0 kubenswrapper[33013]: I0313 11:15:01.132239 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-config\") pod \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " Mar 13 11:15:01.133290 master-0 kubenswrapper[33013]: I0313 11:15:01.132436 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpc86\" (UniqueName: \"kubernetes.io/projected/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-kube-api-access-dpc86\") pod \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " Mar 13 11:15:01.133290 master-0 kubenswrapper[33013]: I0313 11:15:01.132531 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-swift-storage-0\") pod \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\" (UID: \"69fa6a94-1b94-44ad-b7b3-5294d3f76e57\") " Mar 13 11:15:01.193626 master-0 kubenswrapper[33013]: I0313 11:15:01.189364 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-kube-api-access-dpc86" (OuterVolumeSpecName: "kube-api-access-dpc86") pod "69fa6a94-1b94-44ad-b7b3-5294d3f76e57" (UID: "69fa6a94-1b94-44ad-b7b3-5294d3f76e57"). InnerVolumeSpecName "kube-api-access-dpc86". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:15:01.241286 master-0 kubenswrapper[33013]: I0313 11:15:01.241203 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "69fa6a94-1b94-44ad-b7b3-5294d3f76e57" (UID: "69fa6a94-1b94-44ad-b7b3-5294d3f76e57"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:01.249692 master-0 kubenswrapper[33013]: I0313 11:15:01.246923 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpc86\" (UniqueName: \"kubernetes.io/projected/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-kube-api-access-dpc86\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:01.249692 master-0 kubenswrapper[33013]: I0313 11:15:01.246970 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:01.261718 master-0 kubenswrapper[33013]: I0313 11:15:01.261286 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "69fa6a94-1b94-44ad-b7b3-5294d3f76e57" (UID: "69fa6a94-1b94-44ad-b7b3-5294d3f76e57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:01.262255 master-0 kubenswrapper[33013]: I0313 11:15:01.262055 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "69fa6a94-1b94-44ad-b7b3-5294d3f76e57" (UID: "69fa6a94-1b94-44ad-b7b3-5294d3f76e57"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:01.289936 master-0 kubenswrapper[33013]: I0313 11:15:01.289857 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-config" (OuterVolumeSpecName: "config") pod "69fa6a94-1b94-44ad-b7b3-5294d3f76e57" (UID: "69fa6a94-1b94-44ad-b7b3-5294d3f76e57"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:01.344276 master-0 kubenswrapper[33013]: I0313 11:15:01.344223 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "69fa6a94-1b94-44ad-b7b3-5294d3f76e57" (UID: "69fa6a94-1b94-44ad-b7b3-5294d3f76e57"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:01.357909 master-0 kubenswrapper[33013]: W0313 11:15:01.355405 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08192f8e_122f_495a_a184_094f712ea9a6.slice/crio-61c8e1648e928cc11b2f9b2bcab2ceda0f0dffa2bc470fb0c90c9cfe990f7978 WatchSource:0}: Error finding container 61c8e1648e928cc11b2f9b2bcab2ceda0f0dffa2bc470fb0c90c9cfe990f7978: Status 404 returned error can't find the container with id 61c8e1648e928cc11b2f9b2bcab2ceda0f0dffa2bc470fb0c90c9cfe990f7978 Mar 13 11:15:01.358674 master-0 kubenswrapper[33013]: I0313 11:15:01.358190 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:01.358674 master-0 kubenswrapper[33013]: I0313 11:15:01.358234 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:01.358674 master-0 kubenswrapper[33013]: I0313 11:15:01.358243 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:01.358674 master-0 kubenswrapper[33013]: I0313 11:15:01.358252 33013 
reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69fa6a94-1b94-44ad-b7b3-5294d3f76e57-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:01.367470 master-0 kubenswrapper[33013]: I0313 11:15:01.367410 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 13 11:15:01.886657 master-0 kubenswrapper[33013]: I0313 11:15:01.884845 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" event={"ID":"69fa6a94-1b94-44ad-b7b3-5294d3f76e57","Type":"ContainerDied","Data":"d2a0ad8e58a99916b42c31acc96d2df58336aa5cac6ad275fb3de671d7e86b46"} Mar 13 11:15:01.886657 master-0 kubenswrapper[33013]: I0313 11:15:01.884895 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b494447c-sl7nf" Mar 13 11:15:01.886657 master-0 kubenswrapper[33013]: I0313 11:15:01.884926 33013 scope.go:117] "RemoveContainer" containerID="c87ada79b3b2ed28b4ccbec15f51a2270514a36795fad9a27b65f22df6634622" Mar 13 11:15:01.890612 master-0 kubenswrapper[33013]: I0313 11:15:01.888498 33013 generic.go:334] "Generic (PLEG): container finished" podID="08192f8e-122f-495a-a184-094f712ea9a6" containerID="8ce4a6d20dbcc6ef96fe6a1e362ccc6a9ba5a5623999459d5a980eb055f01a55" exitCode=0 Mar 13 11:15:01.890612 master-0 kubenswrapper[33013]: I0313 11:15:01.888554 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"08192f8e-122f-495a-a184-094f712ea9a6","Type":"ContainerDied","Data":"8ce4a6d20dbcc6ef96fe6a1e362ccc6a9ba5a5623999459d5a980eb055f01a55"} Mar 13 11:15:01.890612 master-0 kubenswrapper[33013]: I0313 11:15:01.888605 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"08192f8e-122f-495a-a184-094f712ea9a6","Type":"ContainerStarted","Data":"61c8e1648e928cc11b2f9b2bcab2ceda0f0dffa2bc470fb0c90c9cfe990f7978"} Mar 13 11:15:01.917674 master-0 kubenswrapper[33013]: I0313 11:15:01.917541 33013 scope.go:117] "RemoveContainer" containerID="b6ce79bb0e7c0d40ddb6c669b378f19e75703411b1813afa0f48402ba562c62a" Mar 13 11:15:02.060903 master-0 kubenswrapper[33013]: I0313 11:15:02.060819 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-sl7nf"] Mar 13 11:15:02.077899 master-0 kubenswrapper[33013]: I0313 11:15:02.074444 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b494447c-sl7nf"] Mar 13 11:15:02.735118 master-0 kubenswrapper[33013]: I0313 11:15:02.732480 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69fa6a94-1b94-44ad-b7b3-5294d3f76e57" path="/var/lib/kubelet/pods/69fa6a94-1b94-44ad-b7b3-5294d3f76e57/volumes" Mar 13 11:15:02.906129 master-0 kubenswrapper[33013]: I0313 11:15:02.905975 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"08192f8e-122f-495a-a184-094f712ea9a6","Type":"ContainerStarted","Data":"f5d13135955ca7c5ccfbe71cbaf4b2d303bc3a03e2a78e742c5e360ad8b0b33b"} Mar 13 11:15:03.927621 master-0 kubenswrapper[33013]: I0313 11:15:03.926455 33013 generic.go:334] "Generic (PLEG): container finished" podID="08192f8e-122f-495a-a184-094f712ea9a6" containerID="f5d13135955ca7c5ccfbe71cbaf4b2d303bc3a03e2a78e742c5e360ad8b0b33b" exitCode=0 Mar 13 11:15:03.927621 master-0 kubenswrapper[33013]: I0313 11:15:03.926529 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"08192f8e-122f-495a-a184-094f712ea9a6","Type":"ContainerDied","Data":"f5d13135955ca7c5ccfbe71cbaf4b2d303bc3a03e2a78e742c5e360ad8b0b33b"} Mar 13 11:15:10.036731 master-0 kubenswrapper[33013]: I0313 11:15:10.035638 33013 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell0-conductor-db-sync-lh728" event={"ID":"3a8b658f-2cd1-4d6c-806b-c234244637df","Type":"ContainerStarted","Data":"551d650fdc705fc124b5738c8f8954e4724e776439863bc83cee2c1e4d71ece3"} Mar 13 11:15:10.390050 master-0 kubenswrapper[33013]: I0313 11:15:10.389917 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-lh728" podStartSLOduration=4.627753363 podStartE2EDuration="16.389889825s" podCreationTimestamp="2026-03-13 11:14:54 +0000 UTC" firstStartedPulling="2026-03-13 11:14:57.876376863 +0000 UTC m=+1081.352330212" lastFinishedPulling="2026-03-13 11:15:09.638513325 +0000 UTC m=+1093.114466674" observedRunningTime="2026-03-13 11:15:10.378415366 +0000 UTC m=+1093.854368725" watchObservedRunningTime="2026-03-13 11:15:10.389889825 +0000 UTC m=+1093.865843184" Mar 13 11:15:11.048617 master-0 kubenswrapper[33013]: I0313 11:15:11.046061 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:15:11.048617 master-0 kubenswrapper[33013]: I0313 11:15:11.046372 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-87aa4-default-external-api-0" podUID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" containerName="glance-log" containerID="cri-o://d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b" gracePeriod=30 Mar 13 11:15:11.048617 master-0 kubenswrapper[33013]: I0313 11:15:11.047153 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-87aa4-default-external-api-0" podUID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" containerName="glance-httpd" containerID="cri-o://6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff" gracePeriod=30 Mar 13 11:15:11.168719 master-0 kubenswrapper[33013]: I0313 11:15:11.156664 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"08192f8e-122f-495a-a184-094f712ea9a6","Type":"ContainerStarted","Data":"4c74a49da97bd717470f1fb4cfddfd1bbaae3b4d52ba871567ce053bb94b5cd1"} Mar 13 11:15:11.168719 master-0 kubenswrapper[33013]: I0313 11:15:11.156747 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"08192f8e-122f-495a-a184-094f712ea9a6","Type":"ContainerStarted","Data":"5a89118e335cba7b609a4f671250c96b6e12ade941424ea03a00a4291cc77aec"} Mar 13 11:15:12.177619 master-0 kubenswrapper[33013]: I0313 11:15:12.168971 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:15:12.177619 master-0 kubenswrapper[33013]: I0313 11:15:12.169289 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-87aa4-default-internal-api-0" podUID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerName="glance-log" containerID="cri-o://b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd" gracePeriod=30 Mar 13 11:15:12.177619 master-0 kubenswrapper[33013]: I0313 11:15:12.169842 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-87aa4-default-internal-api-0" podUID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerName="glance-httpd" containerID="cri-o://d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8" gracePeriod=30 Mar 13 11:15:12.197959 master-0 kubenswrapper[33013]: I0313 11:15:12.197174 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"08192f8e-122f-495a-a184-094f712ea9a6","Type":"ContainerStarted","Data":"f05cc69ac10e68f52ca17deb40404a6994beea4c3119de7a243b5f58c865cd44"} Mar 13 11:15:12.197959 master-0 kubenswrapper[33013]: I0313 11:15:12.197229 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"08192f8e-122f-495a-a184-094f712ea9a6","Type":"ContainerStarted","Data":"7f6ef91e7447a5cbe7a76b402b5ba7fe91702fdaa989f48c826d1ee2b1fbd838"} Mar 13 11:15:12.201519 master-0 kubenswrapper[33013]: I0313 11:15:12.201460 33013 generic.go:334] "Generic (PLEG): container finished" podID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" containerID="d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b" exitCode=143 Mar 13 11:15:12.201714 master-0 kubenswrapper[33013]: I0313 11:15:12.201522 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-external-api-0" event={"ID":"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35","Type":"ContainerDied","Data":"d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b"} Mar 13 11:15:13.216758 master-0 kubenswrapper[33013]: I0313 11:15:13.216684 33013 generic.go:334] "Generic (PLEG): container finished" podID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerID="b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd" exitCode=143 Mar 13 11:15:13.217493 master-0 kubenswrapper[33013]: I0313 11:15:13.216780 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"bed3cf98-6b1c-4fb8-b082-57025157fab4","Type":"ContainerDied","Data":"b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd"} Mar 13 11:15:13.221229 master-0 kubenswrapper[33013]: I0313 11:15:13.221179 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"08192f8e-122f-495a-a184-094f712ea9a6","Type":"ContainerStarted","Data":"5a36b6343f711eb1bc8c647e514780c34a515f1789768061516790b9308e7b97"} Mar 13 11:15:13.221505 master-0 kubenswrapper[33013]: I0313 11:15:13.221434 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 13 11:15:13.221505 master-0 kubenswrapper[33013]: I0313 11:15:13.221488 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ironic-inspector-0" Mar 13 11:15:13.333385 master-0 kubenswrapper[33013]: I0313 11:15:13.333289 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=14.333261766 podStartE2EDuration="14.333261766s" podCreationTimestamp="2026-03-13 11:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:15:13.324133775 +0000 UTC m=+1096.800087124" watchObservedRunningTime="2026-03-13 11:15:13.333261766 +0000 UTC m=+1096.809215115" Mar 13 11:15:15.069272 master-0 kubenswrapper[33013]: I0313 11:15:15.069212 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.124746 master-0 kubenswrapper[33013]: I0313 11:15:15.124671 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-combined-ca-bundle\") pod \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " Mar 13 11:15:15.125031 master-0 kubenswrapper[33013]: I0313 11:15:15.124978 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " Mar 13 11:15:15.125127 master-0 kubenswrapper[33013]: I0313 11:15:15.125044 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-scripts\") pod \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " Mar 13 11:15:15.125127 master-0 kubenswrapper[33013]: I0313 11:15:15.125074 33013 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-logs\") pod \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " Mar 13 11:15:15.125200 master-0 kubenswrapper[33013]: I0313 11:15:15.125163 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnb7f\" (UniqueName: \"kubernetes.io/projected/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-kube-api-access-hnb7f\") pod \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " Mar 13 11:15:15.125282 master-0 kubenswrapper[33013]: I0313 11:15:15.125252 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-httpd-run\") pod \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " Mar 13 11:15:15.125383 master-0 kubenswrapper[33013]: I0313 11:15:15.125357 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-public-tls-certs\") pod \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " Mar 13 11:15:15.125426 master-0 kubenswrapper[33013]: I0313 11:15:15.125392 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-config-data\") pod \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\" (UID: \"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35\") " Mar 13 11:15:15.147387 master-0 kubenswrapper[33013]: I0313 11:15:15.147316 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-scripts" (OuterVolumeSpecName: "scripts") pod 
"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" (UID: "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:15.147663 master-0 kubenswrapper[33013]: I0313 11:15:15.147525 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-logs" (OuterVolumeSpecName: "logs") pod "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" (UID: "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:15:15.147936 master-0 kubenswrapper[33013]: I0313 11:15:15.147865 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" (UID: "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:15:15.154824 master-0 kubenswrapper[33013]: I0313 11:15:15.154756 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-kube-api-access-hnb7f" (OuterVolumeSpecName: "kube-api-access-hnb7f") pod "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" (UID: "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35"). InnerVolumeSpecName "kube-api-access-hnb7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:15:15.179177 master-0 kubenswrapper[33013]: I0313 11:15:15.179140 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78" (OuterVolumeSpecName: "glance") pod "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" (UID: "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35"). InnerVolumeSpecName "pvc-4701fe27-d49b-425e-b633-bef2656c1d02". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 11:15:15.184630 master-0 kubenswrapper[33013]: I0313 11:15:15.184551 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" (UID: "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:15.217461 master-0 kubenswrapper[33013]: I0313 11:15:15.213285 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-config-data" (OuterVolumeSpecName: "config-data") pod "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" (UID: "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:15.228939 master-0 kubenswrapper[33013]: I0313 11:15:15.228866 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:15.228939 master-0 kubenswrapper[33013]: I0313 11:15:15.228939 33013 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") on node \"master-0\" " Mar 13 11:15:15.229455 master-0 kubenswrapper[33013]: I0313 11:15:15.228967 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:15.229455 master-0 kubenswrapper[33013]: I0313 11:15:15.228978 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:15.229455 master-0 kubenswrapper[33013]: I0313 11:15:15.228987 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnb7f\" (UniqueName: \"kubernetes.io/projected/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-kube-api-access-hnb7f\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:15.229455 master-0 kubenswrapper[33013]: I0313 11:15:15.228998 33013 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:15.229455 master-0 kubenswrapper[33013]: I0313 11:15:15.229006 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:15.230476 master-0 kubenswrapper[33013]: I0313 11:15:15.230422 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" (UID: "0ecfb801-ccd8-478c-83e0-d5c4b3cacc35"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:15.251287 master-0 kubenswrapper[33013]: I0313 11:15:15.251233 33013 generic.go:334] "Generic (PLEG): container finished" podID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" containerID="6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff" exitCode=0 Mar 13 11:15:15.251287 master-0 kubenswrapper[33013]: I0313 11:15:15.251282 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-external-api-0" event={"ID":"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35","Type":"ContainerDied","Data":"6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff"} Mar 13 11:15:15.251679 master-0 kubenswrapper[33013]: I0313 11:15:15.251311 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-external-api-0" event={"ID":"0ecfb801-ccd8-478c-83e0-d5c4b3cacc35","Type":"ContainerDied","Data":"7a7d3485699b3a1c2dc0617cd985bdbbdbc655b9f386d4d1004429d1bab7b5a2"} Mar 13 11:15:15.251679 master-0 kubenswrapper[33013]: I0313 11:15:15.251337 33013 scope.go:117] "RemoveContainer" containerID="6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff" Mar 13 11:15:15.251679 master-0 kubenswrapper[33013]: I0313 11:15:15.251469 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.265854 master-0 kubenswrapper[33013]: I0313 11:15:15.265635 33013 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 13 11:15:15.265854 master-0 kubenswrapper[33013]: I0313 11:15:15.265825 33013 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4701fe27-d49b-425e-b633-bef2656c1d02" (UniqueName: "kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78") on node "master-0" Mar 13 11:15:15.301165 master-0 kubenswrapper[33013]: I0313 11:15:15.301109 33013 scope.go:117] "RemoveContainer" containerID="d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b" Mar 13 11:15:15.339167 master-0 kubenswrapper[33013]: I0313 11:15:15.339100 33013 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:15.342803 master-0 kubenswrapper[33013]: I0313 11:15:15.342753 33013 reconciler_common.go:293] "Volume detached for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:15.367612 master-0 kubenswrapper[33013]: I0313 11:15:15.367536 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:15:15.369791 master-0 kubenswrapper[33013]: I0313 11:15:15.369751 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 13 11:15:15.370128 master-0 kubenswrapper[33013]: I0313 11:15:15.369926 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 13 11:15:15.386644 master-0 kubenswrapper[33013]: I0313 11:15:15.386614 33013 scope.go:117] "RemoveContainer" containerID="6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff" Mar 13 11:15:15.387282 master-0 kubenswrapper[33013]: E0313 11:15:15.387252 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff\": container with ID starting with 6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff not found: ID does not exist" containerID="6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff" Mar 13 11:15:15.387419 master-0 kubenswrapper[33013]: I0313 11:15:15.387385 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff"} err="failed to get container status \"6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff\": rpc error: code = NotFound desc = could not find container \"6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff\": container with ID starting with 6486896c62de4a47dd42423a97d3f7223290593fa0980868a9d696c5afd1f3ff not found: ID does not exist" Mar 13 11:15:15.387516 master-0 kubenswrapper[33013]: I0313 11:15:15.387502 33013 scope.go:117] "RemoveContainer" containerID="d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b" Mar 13 11:15:15.388290 master-0 kubenswrapper[33013]: E0313 11:15:15.388265 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b\": container with ID starting with d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b not found: ID does not exist" containerID="d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b" Mar 13 11:15:15.388420 master-0 kubenswrapper[33013]: I0313 11:15:15.388393 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b"} err="failed to get container status \"d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b\": rpc error: code = NotFound desc = could not find container 
\"d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b\": container with ID starting with d717f42eba136a46bb6903f61408a7fde2c0f5af3f7453c8b44fc0dcd8c3c84b not found: ID does not exist" Mar 13 11:15:15.407174 master-0 kubenswrapper[33013]: I0313 11:15:15.407089 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:15:15.421108 master-0 kubenswrapper[33013]: I0313 11:15:15.420556 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:15:15.421320 master-0 kubenswrapper[33013]: E0313 11:15:15.421197 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fa6a94-1b94-44ad-b7b3-5294d3f76e57" containerName="init" Mar 13 11:15:15.421320 master-0 kubenswrapper[33013]: I0313 11:15:15.421214 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fa6a94-1b94-44ad-b7b3-5294d3f76e57" containerName="init" Mar 13 11:15:15.421320 master-0 kubenswrapper[33013]: E0313 11:15:15.421251 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" containerName="glance-log" Mar 13 11:15:15.421320 master-0 kubenswrapper[33013]: I0313 11:15:15.421257 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" containerName="glance-log" Mar 13 11:15:15.421320 master-0 kubenswrapper[33013]: E0313 11:15:15.421286 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fa6a94-1b94-44ad-b7b3-5294d3f76e57" containerName="dnsmasq-dns" Mar 13 11:15:15.421320 master-0 kubenswrapper[33013]: I0313 11:15:15.421293 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fa6a94-1b94-44ad-b7b3-5294d3f76e57" containerName="dnsmasq-dns" Mar 13 11:15:15.421858 master-0 kubenswrapper[33013]: E0313 11:15:15.421332 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" 
containerName="glance-httpd" Mar 13 11:15:15.421858 master-0 kubenswrapper[33013]: I0313 11:15:15.421339 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" containerName="glance-httpd" Mar 13 11:15:15.421858 master-0 kubenswrapper[33013]: I0313 11:15:15.421802 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" containerName="glance-httpd" Mar 13 11:15:15.421858 master-0 kubenswrapper[33013]: I0313 11:15:15.421856 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="69fa6a94-1b94-44ad-b7b3-5294d3f76e57" containerName="dnsmasq-dns" Mar 13 11:15:15.422012 master-0 kubenswrapper[33013]: I0313 11:15:15.421890 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" containerName="glance-log" Mar 13 11:15:15.423319 master-0 kubenswrapper[33013]: I0313 11:15:15.423275 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.432780 master-0 kubenswrapper[33013]: I0313 11:15:15.432706 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:15:15.435639 master-0 kubenswrapper[33013]: I0313 11:15:15.435440 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-87aa4-default-external-config-data" Mar 13 11:15:15.435639 master-0 kubenswrapper[33013]: I0313 11:15:15.435468 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 13 11:15:15.435762 master-0 kubenswrapper[33013]: I0313 11:15:15.435658 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 13 11:15:15.551600 master-0 kubenswrapper[33013]: I0313 11:15:15.551189 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.551600 master-0 kubenswrapper[33013]: I0313 11:15:15.551284 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.551600 master-0 kubenswrapper[33013]: I0313 11:15:15.551339 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73ea3a69-e811-41fd-af69-2561dea4762a-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.551600 master-0 kubenswrapper[33013]: I0313 11:15:15.551369 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.551600 master-0 kubenswrapper[33013]: I0313 11:15:15.551391 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73ea3a69-e811-41fd-af69-2561dea4762a-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 
11:15:15.551600 master-0 kubenswrapper[33013]: I0313 11:15:15.551412 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-public-tls-certs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.551600 master-0 kubenswrapper[33013]: I0313 11:15:15.551442 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.551600 master-0 kubenswrapper[33013]: I0313 11:15:15.551504 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z82m\" (UniqueName: \"kubernetes.io/projected/73ea3a69-e811-41fd-af69-2561dea4762a-kube-api-access-8z82m\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.653783 master-0 kubenswrapper[33013]: I0313 11:15:15.653708 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.654011 master-0 kubenswrapper[33013]: I0313 11:15:15.653822 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.654011 master-0 kubenswrapper[33013]: I0313 11:15:15.653883 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73ea3a69-e811-41fd-af69-2561dea4762a-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.654011 master-0 kubenswrapper[33013]: I0313 11:15:15.653919 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.654011 master-0 kubenswrapper[33013]: I0313 11:15:15.653951 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73ea3a69-e811-41fd-af69-2561dea4762a-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.654011 master-0 kubenswrapper[33013]: I0313 11:15:15.653979 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-public-tls-certs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.654214 master-0 kubenswrapper[33013]: I0313 11:15:15.654024 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.654214 master-0 kubenswrapper[33013]: I0313 11:15:15.654176 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z82m\" (UniqueName: \"kubernetes.io/projected/73ea3a69-e811-41fd-af69-2561dea4762a-kube-api-access-8z82m\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.658293 master-0 kubenswrapper[33013]: I0313 11:15:15.658257 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73ea3a69-e811-41fd-af69-2561dea4762a-httpd-run\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.658526 master-0 kubenswrapper[33013]: I0313 11:15:15.658500 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73ea3a69-e811-41fd-af69-2561dea4762a-logs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.660928 master-0 kubenswrapper[33013]: I0313 11:15:15.660883 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-config-data\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.664620 master-0 kubenswrapper[33013]: I0313 11:15:15.662499 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-public-tls-certs\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.664620 master-0 kubenswrapper[33013]: I0313 11:15:15.663658 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-scripts\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.664814 master-0 kubenswrapper[33013]: I0313 11:15:15.664711 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ea3a69-e811-41fd-af69-2561dea4762a-combined-ca-bundle\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.666413 master-0 kubenswrapper[33013]: I0313 11:15:15.665691 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 13 11:15:15.666413 master-0 kubenswrapper[33013]: I0313 11:15:15.665753 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/02d92e594b7cf20d10752edde97d9397ac0766c013b947c8de1147a201f75769/globalmount\"" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:15.683679 master-0 kubenswrapper[33013]: I0313 11:15:15.677305 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z82m\" (UniqueName: \"kubernetes.io/projected/73ea3a69-e811-41fd-af69-2561dea4762a-kube-api-access-8z82m\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:16.105444 master-0 kubenswrapper[33013]: I0313 11:15:16.104841 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.178617 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-scripts\") pod \"bed3cf98-6b1c-4fb8-b082-57025157fab4\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.178726 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-logs\") pod \"bed3cf98-6b1c-4fb8-b082-57025157fab4\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.179449 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"bed3cf98-6b1c-4fb8-b082-57025157fab4\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.179470 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-logs" (OuterVolumeSpecName: "logs") pod "bed3cf98-6b1c-4fb8-b082-57025157fab4" (UID: "bed3cf98-6b1c-4fb8-b082-57025157fab4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.179662 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-internal-tls-certs\") pod \"bed3cf98-6b1c-4fb8-b082-57025157fab4\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.180515 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-httpd-run\") pod \"bed3cf98-6b1c-4fb8-b082-57025157fab4\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.180595 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-config-data\") pod \"bed3cf98-6b1c-4fb8-b082-57025157fab4\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.180648 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-combined-ca-bundle\") pod \"bed3cf98-6b1c-4fb8-b082-57025157fab4\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.180857 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5xfh\" (UniqueName: \"kubernetes.io/projected/bed3cf98-6b1c-4fb8-b082-57025157fab4-kube-api-access-p5xfh\") pod \"bed3cf98-6b1c-4fb8-b082-57025157fab4\" (UID: \"bed3cf98-6b1c-4fb8-b082-57025157fab4\") " Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.182012 33013 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.182924 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bed3cf98-6b1c-4fb8-b082-57025157fab4" (UID: "bed3cf98-6b1c-4fb8-b082-57025157fab4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:15:16.188701 master-0 kubenswrapper[33013]: I0313 11:15:16.186724 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed3cf98-6b1c-4fb8-b082-57025157fab4-kube-api-access-p5xfh" (OuterVolumeSpecName: "kube-api-access-p5xfh") pod "bed3cf98-6b1c-4fb8-b082-57025157fab4" (UID: "bed3cf98-6b1c-4fb8-b082-57025157fab4"). InnerVolumeSpecName "kube-api-access-p5xfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:15:16.192915 master-0 kubenswrapper[33013]: I0313 11:15:16.192809 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-scripts" (OuterVolumeSpecName: "scripts") pod "bed3cf98-6b1c-4fb8-b082-57025157fab4" (UID: "bed3cf98-6b1c-4fb8-b082-57025157fab4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:16.217798 master-0 kubenswrapper[33013]: I0313 11:15:16.216998 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bed3cf98-6b1c-4fb8-b082-57025157fab4" (UID: "bed3cf98-6b1c-4fb8-b082-57025157fab4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:16.239188 master-0 kubenswrapper[33013]: I0313 11:15:16.239126 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bed3cf98-6b1c-4fb8-b082-57025157fab4" (UID: "bed3cf98-6b1c-4fb8-b082-57025157fab4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:16.259091 master-0 kubenswrapper[33013]: I0313 11:15:16.259028 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-config-data" (OuterVolumeSpecName: "config-data") pod "bed3cf98-6b1c-4fb8-b082-57025157fab4" (UID: "bed3cf98-6b1c-4fb8-b082-57025157fab4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:16.285271 master-0 kubenswrapper[33013]: I0313 11:15:16.285214 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5xfh\" (UniqueName: \"kubernetes.io/projected/bed3cf98-6b1c-4fb8-b082-57025157fab4-kube-api-access-p5xfh\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:16.285271 master-0 kubenswrapper[33013]: I0313 11:15:16.285260 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:16.285271 master-0 kubenswrapper[33013]: I0313 11:15:16.285271 33013 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:16.285271 master-0 kubenswrapper[33013]: I0313 11:15:16.285281 33013 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/bed3cf98-6b1c-4fb8-b082-57025157fab4-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:16.285271 master-0 kubenswrapper[33013]: I0313 11:15:16.285290 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:16.285698 master-0 kubenswrapper[33013]: I0313 11:15:16.285298 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed3cf98-6b1c-4fb8-b082-57025157fab4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:16.291041 master-0 kubenswrapper[33013]: I0313 11:15:16.290861 33013 generic.go:334] "Generic (PLEG): container finished" podID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerID="d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8" exitCode=0 Mar 13 11:15:16.291041 master-0 kubenswrapper[33013]: I0313 11:15:16.291001 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:16.291041 master-0 kubenswrapper[33013]: I0313 11:15:16.291002 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"bed3cf98-6b1c-4fb8-b082-57025157fab4","Type":"ContainerDied","Data":"d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8"} Mar 13 11:15:16.291041 master-0 kubenswrapper[33013]: I0313 11:15:16.291042 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"bed3cf98-6b1c-4fb8-b082-57025157fab4","Type":"ContainerDied","Data":"f360c2612e12204b490419ca34c3e5bf63c37fe893fa79436f38490561d2739b"} Mar 13 11:15:16.291412 master-0 kubenswrapper[33013]: I0313 11:15:16.291061 33013 scope.go:117] "RemoveContainer" containerID="d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8" Mar 13 11:15:16.297335 master-0 kubenswrapper[33013]: I0313 11:15:16.296994 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 13 11:15:16.338986 master-0 kubenswrapper[33013]: I0313 11:15:16.338920 33013 scope.go:117] "RemoveContainer" containerID="b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd" Mar 13 11:15:16.378335 master-0 kubenswrapper[33013]: I0313 11:15:16.378300 33013 scope.go:117] "RemoveContainer" containerID="d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8" Mar 13 11:15:16.379182 master-0 kubenswrapper[33013]: E0313 11:15:16.379136 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8\": container with ID starting with d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8 not found: ID does not exist" containerID="d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8" Mar 13 
11:15:16.379257 master-0 kubenswrapper[33013]: I0313 11:15:16.379190 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8"} err="failed to get container status \"d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8\": rpc error: code = NotFound desc = could not find container \"d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8\": container with ID starting with d776249bd2a7755a857f4fd2cfc9af209747a181aa8ecdb3e2e90a7ed20d0de8 not found: ID does not exist" Mar 13 11:15:16.379257 master-0 kubenswrapper[33013]: I0313 11:15:16.379222 33013 scope.go:117] "RemoveContainer" containerID="b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd" Mar 13 11:15:16.379981 master-0 kubenswrapper[33013]: E0313 11:15:16.379946 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd\": container with ID starting with b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd not found: ID does not exist" containerID="b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd" Mar 13 11:15:16.380037 master-0 kubenswrapper[33013]: I0313 11:15:16.379990 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd"} err="failed to get container status \"b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd\": rpc error: code = NotFound desc = could not find container \"b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd\": container with ID starting with b126e8ed7d274571203637c395cc9ebdcfd10117bd1544521b82b098c7be60dd not found: ID does not exist" Mar 13 11:15:16.728101 master-0 kubenswrapper[33013]: I0313 11:15:16.727879 33013 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="0ecfb801-ccd8-478c-83e0-d5c4b3cacc35" path="/var/lib/kubelet/pods/0ecfb801-ccd8-478c-83e0-d5c4b3cacc35/volumes" Mar 13 11:15:17.090133 master-0 kubenswrapper[33013]: I0313 11:15:17.090065 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c" (OuterVolumeSpecName: "glance") pod "bed3cf98-6b1c-4fb8-b082-57025157fab4" (UID: "bed3cf98-6b1c-4fb8-b082-57025157fab4"). InnerVolumeSpecName "pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 13 11:15:17.103092 master-0 kubenswrapper[33013]: I0313 11:15:17.103041 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4701fe27-d49b-425e-b633-bef2656c1d02\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dc277845-cac0-4b71-ac3a-83868b9b8a78\") pod \"glance-87aa4-default-external-api-0\" (UID: \"73ea3a69-e811-41fd-af69-2561dea4762a\") " pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:17.104901 master-0 kubenswrapper[33013]: I0313 11:15:17.104847 33013 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") on node \"master-0\" " Mar 13 11:15:17.130092 master-0 kubenswrapper[33013]: I0313 11:15:17.130052 33013 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 13 11:15:17.130883 master-0 kubenswrapper[33013]: I0313 11:15:17.130842 33013 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96" (UniqueName: "kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c") on node "master-0" Mar 13 11:15:17.207386 master-0 kubenswrapper[33013]: I0313 11:15:17.207347 33013 reconciler_common.go:293] "Volume detached for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:17.237809 master-0 kubenswrapper[33013]: I0313 11:15:17.237733 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:15:17.252898 master-0 kubenswrapper[33013]: I0313 11:15:17.252843 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:15:17.278889 master-0 kubenswrapper[33013]: I0313 11:15:17.278833 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:15:17.279497 master-0 kubenswrapper[33013]: E0313 11:15:17.279471 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerName="glance-httpd" Mar 13 11:15:17.279544 master-0 kubenswrapper[33013]: I0313 11:15:17.279497 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerName="glance-httpd" Mar 13 11:15:17.279608 master-0 kubenswrapper[33013]: E0313 11:15:17.279571 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerName="glance-log" Mar 13 11:15:17.279652 master-0 kubenswrapper[33013]: I0313 11:15:17.279607 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerName="glance-log" Mar 13 
11:15:17.280019 master-0 kubenswrapper[33013]: I0313 11:15:17.279890 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerName="glance-httpd" Mar 13 11:15:17.280019 master-0 kubenswrapper[33013]: I0313 11:15:17.279922 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed3cf98-6b1c-4fb8-b082-57025157fab4" containerName="glance-log" Mar 13 11:15:17.281473 master-0 kubenswrapper[33013]: I0313 11:15:17.281453 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.285616 master-0 kubenswrapper[33013]: I0313 11:15:17.285243 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 13 11:15:17.288073 master-0 kubenswrapper[33013]: I0313 11:15:17.287000 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-87aa4-default-internal-config-data" Mar 13 11:15:17.309704 master-0 kubenswrapper[33013]: I0313 11:15:17.306971 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:15:17.332248 master-0 kubenswrapper[33013]: I0313 11:15:17.332119 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:17.411828 master-0 kubenswrapper[33013]: I0313 11:15:17.411755 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.411828 master-0 kubenswrapper[33013]: I0313 11:15:17.411823 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-internal-tls-certs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.412135 master-0 kubenswrapper[33013]: I0313 11:15:17.411888 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.412135 master-0 kubenswrapper[33013]: I0313 11:15:17.411925 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzdqf\" (UniqueName: \"kubernetes.io/projected/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-kube-api-access-gzdqf\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.412135 master-0 kubenswrapper[33013]: I0313 11:15:17.411980 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.412135 master-0 kubenswrapper[33013]: I0313 11:15:17.412089 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.412135 master-0 kubenswrapper[33013]: I0313 11:15:17.412131 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.412361 master-0 kubenswrapper[33013]: I0313 11:15:17.412226 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.518704 master-0 kubenswrapper[33013]: I0313 11:15:17.515685 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.518704 master-0 
kubenswrapper[33013]: I0313 11:15:17.515753 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzdqf\" (UniqueName: \"kubernetes.io/projected/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-kube-api-access-gzdqf\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.518704 master-0 kubenswrapper[33013]: I0313 11:15:17.515801 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.518704 master-0 kubenswrapper[33013]: I0313 11:15:17.515896 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.518704 master-0 kubenswrapper[33013]: I0313 11:15:17.515930 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.518704 master-0 kubenswrapper[33013]: I0313 11:15:17.516016 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " 
pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.518704 master-0 kubenswrapper[33013]: I0313 11:15:17.516071 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.518704 master-0 kubenswrapper[33013]: I0313 11:15:17.516094 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-internal-tls-certs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.521134 master-0 kubenswrapper[33013]: I0313 11:15:17.518295 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-logs\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.522483 master-0 kubenswrapper[33013]: I0313 11:15:17.522424 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-scripts\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.523218 master-0 kubenswrapper[33013]: I0313 11:15:17.522958 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-httpd-run\") pod \"glance-87aa4-default-internal-api-0\" (UID: 
\"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.527858 master-0 kubenswrapper[33013]: I0313 11:15:17.526032 33013 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 13 11:15:17.527858 master-0 kubenswrapper[33013]: I0313 11:15:17.526078 33013 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/946cdf3189fcbc367fb7e7cfd5e4aad164d151a73965b6f865a738752ef6bb2a/globalmount\"" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.529979 master-0 kubenswrapper[33013]: I0313 11:15:17.529099 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-combined-ca-bundle\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.529979 master-0 kubenswrapper[33013]: I0313 11:15:17.529816 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-config-data\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.529979 master-0 kubenswrapper[33013]: I0313 11:15:17.529919 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-internal-tls-certs\") pod 
\"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.540002 master-0 kubenswrapper[33013]: I0313 11:15:17.539948 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzdqf\" (UniqueName: \"kubernetes.io/projected/ae3532e4-2e8a-40a1-b00e-88a7dc38871f-kube-api-access-gzdqf\") pod \"glance-87aa4-default-internal-api-0\" (UID: \"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:17.896042 master-0 kubenswrapper[33013]: W0313 11:15:17.892294 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73ea3a69_e811_41fd_af69_2561dea4762a.slice/crio-97f8a178ec244334bf3a1f0c52f86bf79a9486f1786a84ce1218f6ffb9e04bc5 WatchSource:0}: Error finding container 97f8a178ec244334bf3a1f0c52f86bf79a9486f1786a84ce1218f6ffb9e04bc5: Status 404 returned error can't find the container with id 97f8a178ec244334bf3a1f0c52f86bf79a9486f1786a84ce1218f6ffb9e04bc5 Mar 13 11:15:17.896042 master-0 kubenswrapper[33013]: I0313 11:15:17.892755 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-external-api-0"] Mar 13 11:15:18.335050 master-0 kubenswrapper[33013]: I0313 11:15:18.334992 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-external-api-0" event={"ID":"73ea3a69-e811-41fd-af69-2561dea4762a","Type":"ContainerStarted","Data":"97f8a178ec244334bf3a1f0c52f86bf79a9486f1786a84ce1218f6ffb9e04bc5"} Mar 13 11:15:18.396775 master-0 kubenswrapper[33013]: I0313 11:15:18.396715 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-399a20e9-80cd-4300-993d-b5f7c7d98b96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^64bd6905-2584-4365-9210-737cd9f0aa1c\") pod \"glance-87aa4-default-internal-api-0\" (UID: 
\"ae3532e4-2e8a-40a1-b00e-88a7dc38871f\") " pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:18.557402 master-0 kubenswrapper[33013]: I0313 11:15:18.557337 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:18.736601 master-0 kubenswrapper[33013]: I0313 11:15:18.735381 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed3cf98-6b1c-4fb8-b082-57025157fab4" path="/var/lib/kubelet/pods/bed3cf98-6b1c-4fb8-b082-57025157fab4/volumes" Mar 13 11:15:19.193311 master-0 kubenswrapper[33013]: W0313 11:15:19.193240 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae3532e4_2e8a_40a1_b00e_88a7dc38871f.slice/crio-ed5923aa89a92adf670f20e43616a4640bb905cef8e969b9d4c1c4df7d960b42 WatchSource:0}: Error finding container ed5923aa89a92adf670f20e43616a4640bb905cef8e969b9d4c1c4df7d960b42: Status 404 returned error can't find the container with id ed5923aa89a92adf670f20e43616a4640bb905cef8e969b9d4c1c4df7d960b42 Mar 13 11:15:19.195067 master-0 kubenswrapper[33013]: I0313 11:15:19.195014 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-87aa4-default-internal-api-0"] Mar 13 11:15:19.347776 master-0 kubenswrapper[33013]: I0313 11:15:19.347708 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-external-api-0" event={"ID":"73ea3a69-e811-41fd-af69-2561dea4762a","Type":"ContainerStarted","Data":"ed8b3301f751ea097de9340e515c77a76f658710e4070c0b93122b6c4337b391"} Mar 13 11:15:19.347776 master-0 kubenswrapper[33013]: I0313 11:15:19.347766 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-external-api-0" event={"ID":"73ea3a69-e811-41fd-af69-2561dea4762a","Type":"ContainerStarted","Data":"e04a85e3df190435174088b2e66f9c2cfe3a18014525a62054a0a27c08e9d231"} Mar 13 11:15:19.352737 
master-0 kubenswrapper[33013]: I0313 11:15:19.352701 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"ae3532e4-2e8a-40a1-b00e-88a7dc38871f","Type":"ContainerStarted","Data":"ed5923aa89a92adf670f20e43616a4640bb905cef8e969b9d4c1c4df7d960b42"} Mar 13 11:15:19.383139 master-0 kubenswrapper[33013]: I0313 11:15:19.383038 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-87aa4-default-external-api-0" podStartSLOduration=4.383009773 podStartE2EDuration="4.383009773s" podCreationTimestamp="2026-03-13 11:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:15:19.372516003 +0000 UTC m=+1102.848469352" watchObservedRunningTime="2026-03-13 11:15:19.383009773 +0000 UTC m=+1102.858963122" Mar 13 11:15:20.365041 master-0 kubenswrapper[33013]: I0313 11:15:20.364966 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 13 11:15:20.365041 master-0 kubenswrapper[33013]: I0313 11:15:20.365047 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 13 11:15:20.370799 master-0 kubenswrapper[33013]: I0313 11:15:20.369150 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"ae3532e4-2e8a-40a1-b00e-88a7dc38871f","Type":"ContainerStarted","Data":"113ede290cdbc61fa0c8f0a9e0ec5e75f29d8c1aad2ed2dd78c6c9497de49189"} Mar 13 11:15:20.371020 master-0 kubenswrapper[33013]: I0313 11:15:20.370824 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-87aa4-default-internal-api-0" event={"ID":"ae3532e4-2e8a-40a1-b00e-88a7dc38871f","Type":"ContainerStarted","Data":"5b35fc2524531ba8a55158e34e03b78fea5f89200ce496b43ccbba4509bda8fc"} Mar 13 11:15:20.400372 master-0 
kubenswrapper[33013]: I0313 11:15:20.400327 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 13 11:15:20.404635 master-0 kubenswrapper[33013]: I0313 11:15:20.402821 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 13 11:15:20.407001 master-0 kubenswrapper[33013]: I0313 11:15:20.406901 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-87aa4-default-internal-api-0" podStartSLOduration=3.40686966 podStartE2EDuration="3.40686966s" podCreationTimestamp="2026-03-13 11:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:15:20.401052794 +0000 UTC m=+1103.877006153" watchObservedRunningTime="2026-03-13 11:15:20.40686966 +0000 UTC m=+1103.882823019" Mar 13 11:15:21.388993 master-0 kubenswrapper[33013]: I0313 11:15:21.388922 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 13 11:15:21.395782 master-0 kubenswrapper[33013]: I0313 11:15:21.395219 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 13 11:15:27.332604 master-0 kubenswrapper[33013]: I0313 11:15:27.332511 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:27.332604 master-0 kubenswrapper[33013]: I0313 11:15:27.332578 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:27.369798 master-0 kubenswrapper[33013]: I0313 11:15:27.369728 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:27.385145 master-0 kubenswrapper[33013]: I0313 11:15:27.379898 33013 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:27.463447 master-0 kubenswrapper[33013]: I0313 11:15:27.463375 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:27.463447 master-0 kubenswrapper[33013]: I0313 11:15:27.463462 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:28.478503 master-0 kubenswrapper[33013]: I0313 11:15:28.478434 33013 generic.go:334] "Generic (PLEG): container finished" podID="3a8b658f-2cd1-4d6c-806b-c234244637df" containerID="551d650fdc705fc124b5738c8f8954e4724e776439863bc83cee2c1e4d71ece3" exitCode=0 Mar 13 11:15:28.479208 master-0 kubenswrapper[33013]: I0313 11:15:28.478544 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lh728" event={"ID":"3a8b658f-2cd1-4d6c-806b-c234244637df","Type":"ContainerDied","Data":"551d650fdc705fc124b5738c8f8954e4724e776439863bc83cee2c1e4d71ece3"} Mar 13 11:15:28.559268 master-0 kubenswrapper[33013]: I0313 11:15:28.559189 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:28.559268 master-0 kubenswrapper[33013]: I0313 11:15:28.559266 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:28.605108 master-0 kubenswrapper[33013]: I0313 11:15:28.605022 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:28.623727 master-0 kubenswrapper[33013]: I0313 11:15:28.623656 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:29.491299 master-0 kubenswrapper[33013]: I0313 
11:15:29.490474 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:29.491299 master-0 kubenswrapper[33013]: I0313 11:15:29.490539 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:29.958818 master-0 kubenswrapper[33013]: I0313 11:15:29.958761 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:15:29.982939 master-0 kubenswrapper[33013]: I0313 11:15:29.982872 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-combined-ca-bundle\") pod \"3a8b658f-2cd1-4d6c-806b-c234244637df\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " Mar 13 11:15:29.983287 master-0 kubenswrapper[33013]: I0313 11:15:29.983085 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-scripts\") pod \"3a8b658f-2cd1-4d6c-806b-c234244637df\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " Mar 13 11:15:29.983287 master-0 kubenswrapper[33013]: I0313 11:15:29.983189 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-config-data\") pod \"3a8b658f-2cd1-4d6c-806b-c234244637df\" (UID: \"3a8b658f-2cd1-4d6c-806b-c234244637df\") " Mar 13 11:15:29.983287 master-0 kubenswrapper[33013]: I0313 11:15:29.983268 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxgsk\" (UniqueName: \"kubernetes.io/projected/3a8b658f-2cd1-4d6c-806b-c234244637df-kube-api-access-xxgsk\") pod \"3a8b658f-2cd1-4d6c-806b-c234244637df\" (UID: 
\"3a8b658f-2cd1-4d6c-806b-c234244637df\") " Mar 13 11:15:30.004688 master-0 kubenswrapper[33013]: I0313 11:15:29.989716 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-scripts" (OuterVolumeSpecName: "scripts") pod "3a8b658f-2cd1-4d6c-806b-c234244637df" (UID: "3a8b658f-2cd1-4d6c-806b-c234244637df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:30.004688 master-0 kubenswrapper[33013]: I0313 11:15:29.989753 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8b658f-2cd1-4d6c-806b-c234244637df-kube-api-access-xxgsk" (OuterVolumeSpecName: "kube-api-access-xxgsk") pod "3a8b658f-2cd1-4d6c-806b-c234244637df" (UID: "3a8b658f-2cd1-4d6c-806b-c234244637df"). InnerVolumeSpecName "kube-api-access-xxgsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:15:30.051926 master-0 kubenswrapper[33013]: I0313 11:15:30.051844 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a8b658f-2cd1-4d6c-806b-c234244637df" (UID: "3a8b658f-2cd1-4d6c-806b-c234244637df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:30.086912 master-0 kubenswrapper[33013]: I0313 11:15:30.086819 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:30.086912 master-0 kubenswrapper[33013]: I0313 11:15:30.086906 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxgsk\" (UniqueName: \"kubernetes.io/projected/3a8b658f-2cd1-4d6c-806b-c234244637df-kube-api-access-xxgsk\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:30.086912 master-0 kubenswrapper[33013]: I0313 11:15:30.086928 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:30.087424 master-0 kubenswrapper[33013]: I0313 11:15:30.086814 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-config-data" (OuterVolumeSpecName: "config-data") pod "3a8b658f-2cd1-4d6c-806b-c234244637df" (UID: "3a8b658f-2cd1-4d6c-806b-c234244637df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:30.190107 master-0 kubenswrapper[33013]: I0313 11:15:30.190045 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a8b658f-2cd1-4d6c-806b-c234244637df-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:30.260610 master-0 kubenswrapper[33013]: I0313 11:15:30.260532 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:30.260891 master-0 kubenswrapper[33013]: I0313 11:15:30.260661 33013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 11:15:30.276331 master-0 kubenswrapper[33013]: I0313 11:15:30.276251 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-87aa4-default-external-api-0" Mar 13 11:15:30.517759 master-0 kubenswrapper[33013]: I0313 11:15:30.515213 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lh728" Mar 13 11:15:30.520278 master-0 kubenswrapper[33013]: I0313 11:15:30.520195 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lh728" event={"ID":"3a8b658f-2cd1-4d6c-806b-c234244637df","Type":"ContainerDied","Data":"869b08b8ccee9b571c50dbc71b22a9334018e6f5b0d302d8bbfaf7aaeaa9b00c"} Mar 13 11:15:30.520398 master-0 kubenswrapper[33013]: I0313 11:15:30.520280 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="869b08b8ccee9b571c50dbc71b22a9334018e6f5b0d302d8bbfaf7aaeaa9b00c" Mar 13 11:15:30.658758 master-0 kubenswrapper[33013]: I0313 11:15:30.656669 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 11:15:30.658758 master-0 kubenswrapper[33013]: E0313 11:15:30.657372 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8b658f-2cd1-4d6c-806b-c234244637df" containerName="nova-cell0-conductor-db-sync" Mar 13 11:15:30.658758 master-0 kubenswrapper[33013]: I0313 11:15:30.657389 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b658f-2cd1-4d6c-806b-c234244637df" containerName="nova-cell0-conductor-db-sync" Mar 13 11:15:30.658758 master-0 kubenswrapper[33013]: I0313 11:15:30.657772 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8b658f-2cd1-4d6c-806b-c234244637df" containerName="nova-cell0-conductor-db-sync" Mar 13 11:15:30.658758 master-0 kubenswrapper[33013]: I0313 11:15:30.658636 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:30.667979 master-0 kubenswrapper[33013]: I0313 11:15:30.667777 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 11:15:30.674951 master-0 kubenswrapper[33013]: I0313 11:15:30.674902 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 13 11:15:30.706439 master-0 kubenswrapper[33013]: I0313 11:15:30.706363 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjc5h\" (UniqueName: \"kubernetes.io/projected/f40ae880-890a-49e1-af72-be2bb125cb9c-kube-api-access-bjc5h\") pod \"nova-cell0-conductor-0\" (UID: \"f40ae880-890a-49e1-af72-be2bb125cb9c\") " pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:30.706758 master-0 kubenswrapper[33013]: I0313 11:15:30.706532 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40ae880-890a-49e1-af72-be2bb125cb9c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f40ae880-890a-49e1-af72-be2bb125cb9c\") " pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:30.706758 master-0 kubenswrapper[33013]: I0313 11:15:30.706709 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40ae880-890a-49e1-af72-be2bb125cb9c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f40ae880-890a-49e1-af72-be2bb125cb9c\") " pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:30.809989 master-0 kubenswrapper[33013]: I0313 11:15:30.809925 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjc5h\" (UniqueName: \"kubernetes.io/projected/f40ae880-890a-49e1-af72-be2bb125cb9c-kube-api-access-bjc5h\") pod \"nova-cell0-conductor-0\" (UID: 
\"f40ae880-890a-49e1-af72-be2bb125cb9c\") " pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:30.811111 master-0 kubenswrapper[33013]: I0313 11:15:30.811085 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40ae880-890a-49e1-af72-be2bb125cb9c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f40ae880-890a-49e1-af72-be2bb125cb9c\") " pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:30.812771 master-0 kubenswrapper[33013]: I0313 11:15:30.812750 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40ae880-890a-49e1-af72-be2bb125cb9c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f40ae880-890a-49e1-af72-be2bb125cb9c\") " pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:30.816027 master-0 kubenswrapper[33013]: I0313 11:15:30.816002 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40ae880-890a-49e1-af72-be2bb125cb9c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f40ae880-890a-49e1-af72-be2bb125cb9c\") " pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:30.816733 master-0 kubenswrapper[33013]: I0313 11:15:30.816677 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40ae880-890a-49e1-af72-be2bb125cb9c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f40ae880-890a-49e1-af72-be2bb125cb9c\") " pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:30.828449 master-0 kubenswrapper[33013]: I0313 11:15:30.828399 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjc5h\" (UniqueName: \"kubernetes.io/projected/f40ae880-890a-49e1-af72-be2bb125cb9c-kube-api-access-bjc5h\") pod \"nova-cell0-conductor-0\" (UID: \"f40ae880-890a-49e1-af72-be2bb125cb9c\") " 
pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:31.035775 master-0 kubenswrapper[33013]: I0313 11:15:31.034062 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:31.530445 master-0 kubenswrapper[33013]: I0313 11:15:31.530379 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 13 11:15:31.562010 master-0 kubenswrapper[33013]: W0313 11:15:31.558031 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf40ae880_890a_49e1_af72_be2bb125cb9c.slice/crio-e46667584212a7a26e1ac8ec7d93ed7e315aceb6d33c334e66d6cf465b630435 WatchSource:0}: Error finding container e46667584212a7a26e1ac8ec7d93ed7e315aceb6d33c334e66d6cf465b630435: Status 404 returned error can't find the container with id e46667584212a7a26e1ac8ec7d93ed7e315aceb6d33c334e66d6cf465b630435 Mar 13 11:15:31.714853 master-0 kubenswrapper[33013]: I0313 11:15:31.714776 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:31.715386 master-0 kubenswrapper[33013]: I0313 11:15:31.715331 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-87aa4-default-internal-api-0" Mar 13 11:15:32.576924 master-0 kubenswrapper[33013]: I0313 11:15:32.576418 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f40ae880-890a-49e1-af72-be2bb125cb9c","Type":"ContainerStarted","Data":"1aaaf094d1df1482686b4911b4ac6a5758f66af66e8fd2acf1e71050b1a5c45c"} Mar 13 11:15:32.576924 master-0 kubenswrapper[33013]: I0313 11:15:32.576480 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:32.576924 master-0 kubenswrapper[33013]: I0313 11:15:32.576496 33013 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f40ae880-890a-49e1-af72-be2bb125cb9c","Type":"ContainerStarted","Data":"e46667584212a7a26e1ac8ec7d93ed7e315aceb6d33c334e66d6cf465b630435"} Mar 13 11:15:32.601677 master-0 kubenswrapper[33013]: I0313 11:15:32.601548 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.601523438 podStartE2EDuration="2.601523438s" podCreationTimestamp="2026-03-13 11:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:15:32.597951226 +0000 UTC m=+1116.073904575" watchObservedRunningTime="2026-03-13 11:15:32.601523438 +0000 UTC m=+1116.077476787" Mar 13 11:15:36.088268 master-0 kubenswrapper[33013]: I0313 11:15:36.088188 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 13 11:15:36.735775 master-0 kubenswrapper[33013]: I0313 11:15:36.735693 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-896qh"] Mar 13 11:15:36.743339 master-0 kubenswrapper[33013]: I0313 11:15:36.743174 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:36.747304 master-0 kubenswrapper[33013]: I0313 11:15:36.747257 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 13 11:15:36.750249 master-0 kubenswrapper[33013]: I0313 11:15:36.750188 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 13 11:15:36.770783 master-0 kubenswrapper[33013]: I0313 11:15:36.770719 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-896qh"] Mar 13 11:15:36.903564 master-0 kubenswrapper[33013]: I0313 11:15:36.903490 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-config-data\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:36.903829 master-0 kubenswrapper[33013]: I0313 11:15:36.903719 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:36.903876 master-0 kubenswrapper[33013]: I0313 11:15:36.903852 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gljz5\" (UniqueName: \"kubernetes.io/projected/3991862a-ccb1-46f5-bdcc-74d3926df07a-kube-api-access-gljz5\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:36.904149 master-0 kubenswrapper[33013]: I0313 11:15:36.903918 33013 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-scripts\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:36.981605 master-0 kubenswrapper[33013]: I0313 11:15:36.981520 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 13 11:15:36.987787 master-0 kubenswrapper[33013]: I0313 11:15:36.983353 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:36.993608 master-0 kubenswrapper[33013]: I0313 11:15:36.990195 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Mar 13 11:15:36.993608 master-0 kubenswrapper[33013]: I0313 11:15:36.992773 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 13 11:15:37.020014 master-0 kubenswrapper[33013]: I0313 11:15:37.017793 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gljz5\" (UniqueName: \"kubernetes.io/projected/3991862a-ccb1-46f5-bdcc-74d3926df07a-kube-api-access-gljz5\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:37.020014 master-0 kubenswrapper[33013]: I0313 11:15:37.017870 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-scripts\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:37.020014 master-0 kubenswrapper[33013]: I0313 11:15:37.018155 33013 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb040ad-c64d-465b-91ed-e961db81a52d-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"1bb040ad-c64d-465b-91ed-e961db81a52d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.020014 master-0 kubenswrapper[33013]: I0313 11:15:37.018189 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb040ad-c64d-465b-91ed-e961db81a52d-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"1bb040ad-c64d-465b-91ed-e961db81a52d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.020014 master-0 kubenswrapper[33013]: I0313 11:15:37.018238 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-config-data\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:37.020014 master-0 kubenswrapper[33013]: I0313 11:15:37.018329 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-672jb\" (UniqueName: \"kubernetes.io/projected/1bb040ad-c64d-465b-91ed-e961db81a52d-kube-api-access-672jb\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"1bb040ad-c64d-465b-91ed-e961db81a52d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.020014 master-0 kubenswrapper[33013]: I0313 11:15:37.018375 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " 
pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:37.030909 master-0 kubenswrapper[33013]: I0313 11:15:37.029059 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-config-data\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:37.030909 master-0 kubenswrapper[33013]: I0313 11:15:37.029555 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-scripts\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:37.056941 master-0 kubenswrapper[33013]: I0313 11:15:37.055291 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:37.066630 master-0 kubenswrapper[33013]: I0313 11:15:37.061402 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gljz5\" (UniqueName: \"kubernetes.io/projected/3991862a-ccb1-46f5-bdcc-74d3926df07a-kube-api-access-gljz5\") pod \"nova-cell0-cell-mapping-896qh\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:37.075282 master-0 kubenswrapper[33013]: I0313 11:15:37.075212 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:37.121081 master-0 kubenswrapper[33013]: I0313 11:15:37.120420 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb040ad-c64d-465b-91ed-e961db81a52d-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"1bb040ad-c64d-465b-91ed-e961db81a52d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.121081 master-0 kubenswrapper[33013]: I0313 11:15:37.120484 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb040ad-c64d-465b-91ed-e961db81a52d-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"1bb040ad-c64d-465b-91ed-e961db81a52d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.121081 master-0 kubenswrapper[33013]: I0313 11:15:37.120562 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-672jb\" (UniqueName: \"kubernetes.io/projected/1bb040ad-c64d-465b-91ed-e961db81a52d-kube-api-access-672jb\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"1bb040ad-c64d-465b-91ed-e961db81a52d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.132567 master-0 kubenswrapper[33013]: I0313 11:15:37.132524 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb040ad-c64d-465b-91ed-e961db81a52d-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"1bb040ad-c64d-465b-91ed-e961db81a52d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.136240 master-0 kubenswrapper[33013]: I0313 11:15:37.135830 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb040ad-c64d-465b-91ed-e961db81a52d-combined-ca-bundle\") 
pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"1bb040ad-c64d-465b-91ed-e961db81a52d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.158620 master-0 kubenswrapper[33013]: I0313 11:15:37.144517 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 11:15:37.158620 master-0 kubenswrapper[33013]: I0313 11:15:37.146780 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:15:37.158620 master-0 kubenswrapper[33013]: I0313 11:15:37.152793 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 11:15:37.178526 master-0 kubenswrapper[33013]: I0313 11:15:37.177450 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:15:37.197071 master-0 kubenswrapper[33013]: I0313 11:15:37.197026 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-672jb\" (UniqueName: \"kubernetes.io/projected/1bb040ad-c64d-465b-91ed-e961db81a52d-kube-api-access-672jb\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"1bb040ad-c64d-465b-91ed-e961db81a52d\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.261128 master-0 kubenswrapper[33013]: I0313 11:15:37.249673 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 11:15:37.344746 master-0 kubenswrapper[33013]: I0313 11:15:37.332943 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 11:15:37.344746 master-0 kubenswrapper[33013]: I0313 11:15:37.333090 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.344746 master-0 kubenswrapper[33013]: I0313 11:15:37.335008 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.344746 master-0 kubenswrapper[33013]: I0313 11:15:37.335227 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-logs\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.344746 master-0 kubenswrapper[33013]: I0313 11:15:37.335290 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-config-data\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.344746 master-0 kubenswrapper[33013]: I0313 11:15:37.335313 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zvdv\" (UniqueName: \"kubernetes.io/projected/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-kube-api-access-6zvdv\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.344746 master-0 kubenswrapper[33013]: I0313 11:15:37.337145 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:37.344746 master-0 kubenswrapper[33013]: I0313 11:15:37.337921 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:15:37.371096 master-0 kubenswrapper[33013]: I0313 11:15:37.371035 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 13 11:15:37.450235 master-0 kubenswrapper[33013]: I0313 11:15:37.426629 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:15:37.450235 master-0 kubenswrapper[33013]: I0313 11:15:37.433528 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 11:15:37.460054 master-0 kubenswrapper[33013]: I0313 11:15:37.456129 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6cck\" (UniqueName: \"kubernetes.io/projected/02632e68-2023-48cd-9770-d99d5a7301a0-kube-api-access-k6cck\") pod \"nova-cell1-novncproxy-0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.460054 master-0 kubenswrapper[33013]: I0313 11:15:37.456215 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.460054 master-0 kubenswrapper[33013]: I0313 11:15:37.456424 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") " 
pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.460054 master-0 kubenswrapper[33013]: I0313 11:15:37.456529 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.460054 master-0 kubenswrapper[33013]: I0313 11:15:37.456604 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-logs\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.460054 master-0 kubenswrapper[33013]: I0313 11:15:37.456668 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-config-data\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.460054 master-0 kubenswrapper[33013]: I0313 11:15:37.456688 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zvdv\" (UniqueName: \"kubernetes.io/projected/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-kube-api-access-6zvdv\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.465653 master-0 kubenswrapper[33013]: I0313 11:15:37.465611 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-logs\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.564964 master-0 kubenswrapper[33013]: I0313 11:15:37.558411 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.564964 master-0 kubenswrapper[33013]: I0313 11:15:37.558512 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.564964 master-0 kubenswrapper[33013]: I0313 11:15:37.558915 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6cck\" (UniqueName: \"kubernetes.io/projected/02632e68-2023-48cd-9770-d99d5a7301a0-kube-api-access-k6cck\") pod \"nova-cell1-novncproxy-0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.564964 master-0 kubenswrapper[33013]: I0313 11:15:37.562654 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.564964 master-0 kubenswrapper[33013]: I0313 11:15:37.563900 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.599612 master-0 kubenswrapper[33013]: I0313 11:15:37.575619 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-config-data\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.599612 master-0 kubenswrapper[33013]: I0313 11:15:37.588712 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zvdv\" (UniqueName: \"kubernetes.io/projected/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-kube-api-access-6zvdv\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.602261 master-0 kubenswrapper[33013]: I0313 11:15:37.602159 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:15:37.622617 master-0 kubenswrapper[33013]: I0313 11:15:37.609601 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " pod="openstack/nova-api-0" Mar 13 11:15:37.622617 master-0 kubenswrapper[33013]: I0313 11:15:37.619122 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:37.637612 master-0 kubenswrapper[33013]: I0313 11:15:37.625118 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:15:37.637612 master-0 kubenswrapper[33013]: I0313 11:15:37.629662 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 11:15:37.642605 master-0 kubenswrapper[33013]: I0313 11:15:37.641423 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:37.683658 master-0 kubenswrapper[33013]: I0313 11:15:37.672226 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbhtv\" (UniqueName: \"kubernetes.io/projected/e221b2cf-5955-4346-a540-24ccb3cbb967-kube-api-access-nbhtv\") pod \"nova-scheduler-0\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " pod="openstack/nova-scheduler-0" Mar 13 11:15:37.683658 master-0 kubenswrapper[33013]: I0313 11:15:37.672683 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " pod="openstack/nova-scheduler-0" Mar 13 11:15:37.683658 master-0 kubenswrapper[33013]: I0313 11:15:37.672758 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-config-data\") pod \"nova-scheduler-0\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " pod="openstack/nova-scheduler-0" Mar 13 11:15:37.730847 master-0 kubenswrapper[33013]: I0313 11:15:37.724149 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:15:37.790925 master-0 kubenswrapper[33013]: I0313 11:15:37.775199 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-config-data\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.790925 master-0 kubenswrapper[33013]: I0313 11:15:37.775304 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " pod="openstack/nova-scheduler-0" Mar 13 11:15:37.790925 master-0 kubenswrapper[33013]: I0313 11:15:37.775393 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-config-data\") pod \"nova-scheduler-0\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " pod="openstack/nova-scheduler-0" Mar 13 11:15:37.790925 master-0 kubenswrapper[33013]: I0313 11:15:37.775440 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05827b59-c611-4b48-b2f5-ff87dd94ad6f-logs\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.790925 master-0 kubenswrapper[33013]: I0313 11:15:37.775522 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbhtv\" (UniqueName: \"kubernetes.io/projected/e221b2cf-5955-4346-a540-24ccb3cbb967-kube-api-access-nbhtv\") pod \"nova-scheduler-0\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " pod="openstack/nova-scheduler-0" Mar 13 11:15:37.790925 master-0 
kubenswrapper[33013]: I0313 11:15:37.775550 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjzs\" (UniqueName: \"kubernetes.io/projected/05827b59-c611-4b48-b2f5-ff87dd94ad6f-kube-api-access-8pjzs\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.790925 master-0 kubenswrapper[33013]: I0313 11:15:37.775571 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.790925 master-0 kubenswrapper[33013]: I0313 11:15:37.788216 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-config-data\") pod \"nova-scheduler-0\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " pod="openstack/nova-scheduler-0" Mar 13 11:15:37.790925 master-0 kubenswrapper[33013]: I0313 11:15:37.788310 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-qhldm"] Mar 13 11:15:37.790925 master-0 kubenswrapper[33013]: I0313 11:15:37.790779 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:37.808349 master-0 kubenswrapper[33013]: I0313 11:15:37.806512 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " pod="openstack/nova-scheduler-0" Mar 13 11:15:37.851608 master-0 kubenswrapper[33013]: I0313 11:15:37.848501 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-qhldm"] Mar 13 11:15:37.878788 master-0 kubenswrapper[33013]: I0313 11:15:37.878184 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-config\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:37.878788 master-0 kubenswrapper[33013]: I0313 11:15:37.878303 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-svc\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:37.878788 master-0 kubenswrapper[33013]: I0313 11:15:37.878339 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6x8\" (UniqueName: \"kubernetes.io/projected/16b462d2-f716-400e-9ff5-51f843fbc2e9-kube-api-access-fv6x8\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:37.878788 master-0 kubenswrapper[33013]: I0313 11:15:37.878465 33013 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05827b59-c611-4b48-b2f5-ff87dd94ad6f-logs\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.878788 master-0 kubenswrapper[33013]: I0313 11:15:37.878527 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:37.940294 master-0 kubenswrapper[33013]: I0313 11:15:37.935289 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:37.940294 master-0 kubenswrapper[33013]: I0313 11:15:37.935395 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pjzs\" (UniqueName: \"kubernetes.io/projected/05827b59-c611-4b48-b2f5-ff87dd94ad6f-kube-api-access-8pjzs\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.940294 master-0 kubenswrapper[33013]: I0313 11:15:37.935434 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.940294 master-0 kubenswrapper[33013]: I0313 11:15:37.935483 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:37.940294 master-0 kubenswrapper[33013]: I0313 11:15:37.935729 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-config-data\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.960753 master-0 kubenswrapper[33013]: I0313 11:15:37.953976 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05827b59-c611-4b48-b2f5-ff87dd94ad6f-logs\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.976219 master-0 kubenswrapper[33013]: I0313 11:15:37.974912 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:37.981286 master-0 kubenswrapper[33013]: I0313 11:15:37.981247 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6cck\" (UniqueName: \"kubernetes.io/projected/02632e68-2023-48cd-9770-d99d5a7301a0-kube-api-access-k6cck\") pod \"nova-cell1-novncproxy-0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:37.998838 master-0 kubenswrapper[33013]: I0313 11:15:37.998658 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-config-data\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:38.046757 master-0 kubenswrapper[33013]: I0313 11:15:38.045056 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.046948 master-0 kubenswrapper[33013]: I0313 11:15:38.046914 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.047221 master-0 kubenswrapper[33013]: I0313 11:15:38.047200 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-config\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.047275 master-0 kubenswrapper[33013]: I0313 11:15:38.047265 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-svc\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.047313 master-0 kubenswrapper[33013]: I0313 11:15:38.047289 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6x8\" (UniqueName: 
\"kubernetes.io/projected/16b462d2-f716-400e-9ff5-51f843fbc2e9-kube-api-access-fv6x8\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.047387 master-0 kubenswrapper[33013]: I0313 11:15:38.047358 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.047655 master-0 kubenswrapper[33013]: I0313 11:15:38.047633 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.047972 master-0 kubenswrapper[33013]: I0313 11:15:38.047948 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-config\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.048557 master-0 kubenswrapper[33013]: I0313 11:15:38.048528 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.048698 master-0 kubenswrapper[33013]: I0313 11:15:38.048533 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.049404 master-0 kubenswrapper[33013]: I0313 11:15:38.049372 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-svc\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.088631 master-0 kubenswrapper[33013]: I0313 11:15:38.083838 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pjzs\" (UniqueName: \"kubernetes.io/projected/05827b59-c611-4b48-b2f5-ff87dd94ad6f-kube-api-access-8pjzs\") pod \"nova-metadata-0\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " pod="openstack/nova-metadata-0" Mar 13 11:15:38.115301 master-0 kubenswrapper[33013]: I0313 11:15:38.112914 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6x8\" (UniqueName: \"kubernetes.io/projected/16b462d2-f716-400e-9ff5-51f843fbc2e9-kube-api-access-fv6x8\") pod \"dnsmasq-dns-5c9c9ccb7c-qhldm\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.138613 master-0 kubenswrapper[33013]: I0313 11:15:38.120171 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:38.138613 master-0 kubenswrapper[33013]: I0313 11:15:38.137069 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:15:38.156627 master-0 kubenswrapper[33013]: I0313 11:15:38.144463 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbhtv\" (UniqueName: \"kubernetes.io/projected/e221b2cf-5955-4346-a540-24ccb3cbb967-kube-api-access-nbhtv\") pod \"nova-scheduler-0\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " pod="openstack/nova-scheduler-0" Mar 13 11:15:38.265158 master-0 kubenswrapper[33013]: I0313 11:15:38.265101 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-896qh"] Mar 13 11:15:38.266768 master-0 kubenswrapper[33013]: I0313 11:15:38.266410 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:38.307994 master-0 kubenswrapper[33013]: I0313 11:15:38.307907 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 13 11:15:38.387767 master-0 kubenswrapper[33013]: I0313 11:15:38.387698 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:15:38.540536 master-0 kubenswrapper[33013]: I0313 11:15:38.540456 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:15:38.612612 master-0 kubenswrapper[33013]: W0313 11:15:38.609765 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de14a46_90d5_4fc8_9823_ba42c7ab4c15.slice/crio-a5a9423705e21cfb807979aab58b8ca6d1ec7140c35d1aca8b721c935c274815 WatchSource:0}: Error finding container a5a9423705e21cfb807979aab58b8ca6d1ec7140c35d1aca8b721c935c274815: Status 404 returned error can't find the container with id a5a9423705e21cfb807979aab58b8ca6d1ec7140c35d1aca8b721c935c274815 Mar 13 11:15:38.788734 master-0 kubenswrapper[33013]: I0313 11:15:38.786997 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-896qh" event={"ID":"3991862a-ccb1-46f5-bdcc-74d3926df07a","Type":"ContainerStarted","Data":"5c37fd4a0223a0cf655982d9e52f27f6b3ea703fe5c07944ea17bdb4964fd4b8"} Mar 13 11:15:38.788734 master-0 kubenswrapper[33013]: I0313 11:15:38.787074 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-896qh" event={"ID":"3991862a-ccb1-46f5-bdcc-74d3926df07a","Type":"ContainerStarted","Data":"cd346b33d203a18341f290afef1683e171d00b3a42298b060759dff2004338c8"} Mar 13 11:15:38.806334 master-0 kubenswrapper[33013]: I0313 11:15:38.806261 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8de14a46-90d5-4fc8-9823-ba42c7ab4c15","Type":"ContainerStarted","Data":"a5a9423705e21cfb807979aab58b8ca6d1ec7140c35d1aca8b721c935c274815"} Mar 13 11:15:38.827889 master-0 kubenswrapper[33013]: I0313 11:15:38.827694 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" 
event={"ID":"1bb040ad-c64d-465b-91ed-e961db81a52d","Type":"ContainerStarted","Data":"2766a34ddd59666f23bf6981c7d7ec1644c003e8563c9e12a5fe143255f5a2eb"} Mar 13 11:15:39.217861 master-0 kubenswrapper[33013]: I0313 11:15:39.217529 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-896qh" podStartSLOduration=3.217502597 podStartE2EDuration="3.217502597s" podCreationTimestamp="2026-03-13 11:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:15:38.813554168 +0000 UTC m=+1122.289507517" watchObservedRunningTime="2026-03-13 11:15:39.217502597 +0000 UTC m=+1122.693455946" Mar 13 11:15:39.222503 master-0 kubenswrapper[33013]: I0313 11:15:39.221740 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:39.251220 master-0 kubenswrapper[33013]: I0313 11:15:39.251129 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 11:15:39.270650 master-0 kubenswrapper[33013]: I0313 11:15:39.270578 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:15:39.301346 master-0 kubenswrapper[33013]: I0313 11:15:39.301102 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-qhldm"] Mar 13 11:15:39.394201 master-0 kubenswrapper[33013]: I0313 11:15:39.394133 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8xndq"] Mar 13 11:15:39.395990 master-0 kubenswrapper[33013]: I0313 11:15:39.395766 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.400973 master-0 kubenswrapper[33013]: I0313 11:15:39.399760 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 13 11:15:39.400973 master-0 kubenswrapper[33013]: I0313 11:15:39.400118 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 13 11:15:39.429358 master-0 kubenswrapper[33013]: I0313 11:15:39.428073 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8xndq"] Mar 13 11:15:39.465310 master-0 kubenswrapper[33013]: I0313 11:15:39.463951 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnl72\" (UniqueName: \"kubernetes.io/projected/125890f6-344c-4e04-ad6f-b721846fc632-kube-api-access-lnl72\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.465310 master-0 kubenswrapper[33013]: I0313 11:15:39.464152 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-config-data\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.465310 master-0 kubenswrapper[33013]: I0313 11:15:39.464327 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.465310 master-0 
kubenswrapper[33013]: I0313 11:15:39.464625 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-scripts\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.569697 master-0 kubenswrapper[33013]: I0313 11:15:39.569619 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.570351 master-0 kubenswrapper[33013]: I0313 11:15:39.569937 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-scripts\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.570351 master-0 kubenswrapper[33013]: I0313 11:15:39.570310 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnl72\" (UniqueName: \"kubernetes.io/projected/125890f6-344c-4e04-ad6f-b721846fc632-kube-api-access-lnl72\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.570423 master-0 kubenswrapper[33013]: I0313 11:15:39.570401 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-config-data\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " 
pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.579638 master-0 kubenswrapper[33013]: I0313 11:15:39.577361 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-config-data\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.588700 master-0 kubenswrapper[33013]: I0313 11:15:39.587750 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.599219 master-0 kubenswrapper[33013]: I0313 11:15:39.598465 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnl72\" (UniqueName: \"kubernetes.io/projected/125890f6-344c-4e04-ad6f-b721846fc632-kube-api-access-lnl72\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.603381 master-0 kubenswrapper[33013]: I0313 11:15:39.603325 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-scripts\") pod \"nova-cell1-conductor-db-sync-8xndq\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.725610 master-0 kubenswrapper[33013]: I0313 11:15:39.725233 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:39.883271 master-0 kubenswrapper[33013]: I0313 11:15:39.883221 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"02632e68-2023-48cd-9770-d99d5a7301a0","Type":"ContainerStarted","Data":"ed124cabaf87d263ecf7dec3b6692b4c30aa96d725b60cc2ccd89456861d1f11"} Mar 13 11:15:39.885763 master-0 kubenswrapper[33013]: I0313 11:15:39.885711 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05827b59-c611-4b48-b2f5-ff87dd94ad6f","Type":"ContainerStarted","Data":"444cd2959970bc91be6cd9bb7bcb9bd00e70237491a31e5dcdb94a6972b62650"} Mar 13 11:15:39.891545 master-0 kubenswrapper[33013]: I0313 11:15:39.891444 33013 generic.go:334] "Generic (PLEG): container finished" podID="16b462d2-f716-400e-9ff5-51f843fbc2e9" containerID="cb92b31e4390fda0c0bd7b89bbc9e98d4558692cc96f087be1ab3043594730d1" exitCode=0 Mar 13 11:15:39.891882 master-0 kubenswrapper[33013]: I0313 11:15:39.891553 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" event={"ID":"16b462d2-f716-400e-9ff5-51f843fbc2e9","Type":"ContainerDied","Data":"cb92b31e4390fda0c0bd7b89bbc9e98d4558692cc96f087be1ab3043594730d1"} Mar 13 11:15:39.891940 master-0 kubenswrapper[33013]: I0313 11:15:39.891904 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" event={"ID":"16b462d2-f716-400e-9ff5-51f843fbc2e9","Type":"ContainerStarted","Data":"8f9c78e39adffaba31b028bb9f108fbcc7a3bd7f72e92ea4a274b7b81638157d"} Mar 13 11:15:39.896718 master-0 kubenswrapper[33013]: I0313 11:15:39.896665 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e221b2cf-5955-4346-a540-24ccb3cbb967","Type":"ContainerStarted","Data":"a62e1a2a952327d379f3959f34c40cd12d11446e018b3199c073c4569ff6dcd1"} Mar 13 11:15:40.317723 master-0 
kubenswrapper[33013]: I0313 11:15:40.317298 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8xndq"] Mar 13 11:15:40.919919 master-0 kubenswrapper[33013]: I0313 11:15:40.919288 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" event={"ID":"16b462d2-f716-400e-9ff5-51f843fbc2e9","Type":"ContainerStarted","Data":"599f42b5b1bc146c48ece4cf639cfa898ee1803a53d5dfa04f6f3511f0df2dd1"} Mar 13 11:15:40.920956 master-0 kubenswrapper[33013]: I0313 11:15:40.920914 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:40.923909 master-0 kubenswrapper[33013]: I0313 11:15:40.923803 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8xndq" event={"ID":"125890f6-344c-4e04-ad6f-b721846fc632","Type":"ContainerStarted","Data":"ef6a074c8db26aa880d2de4041ab677ea20977c0fc065c73c55c2b8d147fe4c5"} Mar 13 11:15:40.923909 master-0 kubenswrapper[33013]: I0313 11:15:40.923870 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8xndq" event={"ID":"125890f6-344c-4e04-ad6f-b721846fc632","Type":"ContainerStarted","Data":"e3b42b1488e93076e797988f7e4e1f715f1156524f2b85fe6bd46513e3d93519"} Mar 13 11:15:41.313283 master-0 kubenswrapper[33013]: I0313 11:15:41.313125 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" podStartSLOduration=4.31309217 podStartE2EDuration="4.31309217s" podCreationTimestamp="2026-03-13 11:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:15:41.222963551 +0000 UTC m=+1124.698916900" watchObservedRunningTime="2026-03-13 11:15:41.31309217 +0000 UTC m=+1124.789045519" Mar 13 11:15:41.321091 master-0 kubenswrapper[33013]: I0313 11:15:41.321020 
33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8xndq" podStartSLOduration=2.321007466 podStartE2EDuration="2.321007466s" podCreationTimestamp="2026-03-13 11:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:15:41.260175166 +0000 UTC m=+1124.736128525" watchObservedRunningTime="2026-03-13 11:15:41.321007466 +0000 UTC m=+1124.796960815" Mar 13 11:15:41.717744 master-0 kubenswrapper[33013]: I0313 11:15:41.710478 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 13 11:15:41.730541 master-0 kubenswrapper[33013]: I0313 11:15:41.730435 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:45.058743 master-0 kubenswrapper[33013]: I0313 11:15:45.058643 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8de14a46-90d5-4fc8-9823-ba42c7ab4c15","Type":"ContainerStarted","Data":"40bac6011bbfc43c3123a59a3a36cfef05001daf7212c7c00cb6eeca338daf6b"} Mar 13 11:15:45.058743 master-0 kubenswrapper[33013]: I0313 11:15:45.058716 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8de14a46-90d5-4fc8-9823-ba42c7ab4c15","Type":"ContainerStarted","Data":"1ad4fb4e3c978f085749b386e86d8ea448072a6f1ab1799db650dad9f8cfe917"} Mar 13 11:15:45.087247 master-0 kubenswrapper[33013]: I0313 11:15:45.087004 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e221b2cf-5955-4346-a540-24ccb3cbb967","Type":"ContainerStarted","Data":"1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5"} Mar 13 11:15:45.096579 master-0 kubenswrapper[33013]: I0313 11:15:45.096508 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"02632e68-2023-48cd-9770-d99d5a7301a0","Type":"ContainerStarted","Data":"f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3"} Mar 13 11:15:45.096895 master-0 kubenswrapper[33013]: I0313 11:15:45.096731 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="02632e68-2023-48cd-9770-d99d5a7301a0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3" gracePeriod=30 Mar 13 11:15:45.108858 master-0 kubenswrapper[33013]: I0313 11:15:45.108783 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05827b59-c611-4b48-b2f5-ff87dd94ad6f","Type":"ContainerStarted","Data":"93f426cebcd9c8d999da94a774ef650c22797daadacaf9ad490b0e24b3f21e4b"} Mar 13 11:15:45.108858 master-0 kubenswrapper[33013]: I0313 11:15:45.108847 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05827b59-c611-4b48-b2f5-ff87dd94ad6f","Type":"ContainerStarted","Data":"6d2380721f08b36133925a65ed5da4fa642eb82475be6a878b2a2b72a25439b9"} Mar 13 11:15:45.109262 master-0 kubenswrapper[33013]: I0313 11:15:45.109000 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerName="nova-metadata-log" containerID="cri-o://6d2380721f08b36133925a65ed5da4fa642eb82475be6a878b2a2b72a25439b9" gracePeriod=30 Mar 13 11:15:45.109386 master-0 kubenswrapper[33013]: I0313 11:15:45.109347 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerName="nova-metadata-metadata" containerID="cri-o://93f426cebcd9c8d999da94a774ef650c22797daadacaf9ad490b0e24b3f21e4b" gracePeriod=30 Mar 13 11:15:45.112919 master-0 kubenswrapper[33013]: I0313 11:15:45.111223 33013 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.692784735 podStartE2EDuration="8.110799337s" podCreationTimestamp="2026-03-13 11:15:37 +0000 UTC" firstStartedPulling="2026-03-13 11:15:38.619162816 +0000 UTC m=+1122.095116165" lastFinishedPulling="2026-03-13 11:15:44.037177418 +0000 UTC m=+1127.513130767" observedRunningTime="2026-03-13 11:15:45.088300604 +0000 UTC m=+1128.564253973" watchObservedRunningTime="2026-03-13 11:15:45.110799337 +0000 UTC m=+1128.586752686" Mar 13 11:15:45.131003 master-0 kubenswrapper[33013]: I0313 11:15:45.130150 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.388359309 podStartE2EDuration="8.13012698s" podCreationTimestamp="2026-03-13 11:15:37 +0000 UTC" firstStartedPulling="2026-03-13 11:15:39.296764225 +0000 UTC m=+1122.772717574" lastFinishedPulling="2026-03-13 11:15:44.038531896 +0000 UTC m=+1127.514485245" observedRunningTime="2026-03-13 11:15:45.12172924 +0000 UTC m=+1128.597682589" watchObservedRunningTime="2026-03-13 11:15:45.13012698 +0000 UTC m=+1128.606080329" Mar 13 11:15:45.149785 master-0 kubenswrapper[33013]: I0313 11:15:45.149314 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.326122179 podStartE2EDuration="8.149285629s" podCreationTimestamp="2026-03-13 11:15:37 +0000 UTC" firstStartedPulling="2026-03-13 11:15:39.221062849 +0000 UTC m=+1122.697016188" lastFinishedPulling="2026-03-13 11:15:44.044226279 +0000 UTC m=+1127.520179638" observedRunningTime="2026-03-13 11:15:45.142381181 +0000 UTC m=+1128.618334530" watchObservedRunningTime="2026-03-13 11:15:45.149285629 +0000 UTC m=+1128.625238978" Mar 13 11:15:45.191336 master-0 kubenswrapper[33013]: I0313 11:15:45.191055 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=3.374356678 podStartE2EDuration="8.191032023s" podCreationTimestamp="2026-03-13 11:15:37 +0000 UTC" firstStartedPulling="2026-03-13 11:15:39.220782881 +0000 UTC m=+1122.696736230" lastFinishedPulling="2026-03-13 11:15:44.037458226 +0000 UTC m=+1127.513411575" observedRunningTime="2026-03-13 11:15:45.176613911 +0000 UTC m=+1128.652567280" watchObservedRunningTime="2026-03-13 11:15:45.191032023 +0000 UTC m=+1128.666985372" Mar 13 11:15:46.140117 master-0 kubenswrapper[33013]: I0313 11:15:46.140052 33013 generic.go:334] "Generic (PLEG): container finished" podID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerID="93f426cebcd9c8d999da94a774ef650c22797daadacaf9ad490b0e24b3f21e4b" exitCode=0 Mar 13 11:15:46.140117 master-0 kubenswrapper[33013]: I0313 11:15:46.140099 33013 generic.go:334] "Generic (PLEG): container finished" podID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerID="6d2380721f08b36133925a65ed5da4fa642eb82475be6a878b2a2b72a25439b9" exitCode=143 Mar 13 11:15:46.141573 master-0 kubenswrapper[33013]: I0313 11:15:46.141536 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05827b59-c611-4b48-b2f5-ff87dd94ad6f","Type":"ContainerDied","Data":"93f426cebcd9c8d999da94a774ef650c22797daadacaf9ad490b0e24b3f21e4b"} Mar 13 11:15:46.141674 master-0 kubenswrapper[33013]: I0313 11:15:46.141599 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05827b59-c611-4b48-b2f5-ff87dd94ad6f","Type":"ContainerDied","Data":"6d2380721f08b36133925a65ed5da4fa642eb82475be6a878b2a2b72a25439b9"} Mar 13 11:15:46.141674 master-0 kubenswrapper[33013]: I0313 11:15:46.141624 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"05827b59-c611-4b48-b2f5-ff87dd94ad6f","Type":"ContainerDied","Data":"444cd2959970bc91be6cd9bb7bcb9bd00e70237491a31e5dcdb94a6972b62650"} Mar 13 11:15:46.141674 master-0 kubenswrapper[33013]: I0313 
11:15:46.141637 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="444cd2959970bc91be6cd9bb7bcb9bd00e70237491a31e5dcdb94a6972b62650" Mar 13 11:15:46.151229 master-0 kubenswrapper[33013]: I0313 11:15:46.151190 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:15:46.270764 master-0 kubenswrapper[33013]: I0313 11:15:46.270695 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-combined-ca-bundle\") pod \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " Mar 13 11:15:46.271224 master-0 kubenswrapper[33013]: I0313 11:15:46.271204 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pjzs\" (UniqueName: \"kubernetes.io/projected/05827b59-c611-4b48-b2f5-ff87dd94ad6f-kube-api-access-8pjzs\") pod \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " Mar 13 11:15:46.271548 master-0 kubenswrapper[33013]: I0313 11:15:46.271532 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-config-data\") pod \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " Mar 13 11:15:46.271785 master-0 kubenswrapper[33013]: I0313 11:15:46.271760 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05827b59-c611-4b48-b2f5-ff87dd94ad6f-logs\") pod \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\" (UID: \"05827b59-c611-4b48-b2f5-ff87dd94ad6f\") " Mar 13 11:15:46.273548 master-0 kubenswrapper[33013]: I0313 11:15:46.273387 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/05827b59-c611-4b48-b2f5-ff87dd94ad6f-logs" (OuterVolumeSpecName: "logs") pod "05827b59-c611-4b48-b2f5-ff87dd94ad6f" (UID: "05827b59-c611-4b48-b2f5-ff87dd94ad6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:15:46.276118 master-0 kubenswrapper[33013]: I0313 11:15:46.276079 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05827b59-c611-4b48-b2f5-ff87dd94ad6f-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:46.279837 master-0 kubenswrapper[33013]: I0313 11:15:46.279759 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05827b59-c611-4b48-b2f5-ff87dd94ad6f-kube-api-access-8pjzs" (OuterVolumeSpecName: "kube-api-access-8pjzs") pod "05827b59-c611-4b48-b2f5-ff87dd94ad6f" (UID: "05827b59-c611-4b48-b2f5-ff87dd94ad6f"). InnerVolumeSpecName "kube-api-access-8pjzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:15:46.329712 master-0 kubenswrapper[33013]: I0313 11:15:46.329572 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05827b59-c611-4b48-b2f5-ff87dd94ad6f" (UID: "05827b59-c611-4b48-b2f5-ff87dd94ad6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:46.349722 master-0 kubenswrapper[33013]: I0313 11:15:46.349646 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-config-data" (OuterVolumeSpecName: "config-data") pod "05827b59-c611-4b48-b2f5-ff87dd94ad6f" (UID: "05827b59-c611-4b48-b2f5-ff87dd94ad6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:46.378749 master-0 kubenswrapper[33013]: I0313 11:15:46.378664 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:46.378749 master-0 kubenswrapper[33013]: I0313 11:15:46.378712 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05827b59-c611-4b48-b2f5-ff87dd94ad6f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:46.378749 master-0 kubenswrapper[33013]: I0313 11:15:46.378724 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pjzs\" (UniqueName: \"kubernetes.io/projected/05827b59-c611-4b48-b2f5-ff87dd94ad6f-kube-api-access-8pjzs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:47.168908 master-0 kubenswrapper[33013]: I0313 11:15:47.168409 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:15:47.219881 master-0 kubenswrapper[33013]: I0313 11:15:47.219829 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:47.257955 master-0 kubenswrapper[33013]: I0313 11:15:47.257880 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:47.351837 master-0 kubenswrapper[33013]: I0313 11:15:47.351762 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:47.356228 master-0 kubenswrapper[33013]: E0313 11:15:47.353297 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerName="nova-metadata-log" Mar 13 11:15:47.356228 master-0 kubenswrapper[33013]: I0313 11:15:47.353325 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerName="nova-metadata-log" Mar 13 11:15:47.356228 master-0 kubenswrapper[33013]: E0313 11:15:47.353405 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerName="nova-metadata-metadata" Mar 13 11:15:47.356228 master-0 kubenswrapper[33013]: I0313 11:15:47.353414 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerName="nova-metadata-metadata" Mar 13 11:15:47.356228 master-0 kubenswrapper[33013]: I0313 11:15:47.355653 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerName="nova-metadata-metadata" Mar 13 11:15:47.356228 master-0 kubenswrapper[33013]: I0313 11:15:47.355706 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" containerName="nova-metadata-log" Mar 13 11:15:47.362460 master-0 kubenswrapper[33013]: I0313 11:15:47.362391 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:15:47.368737 master-0 kubenswrapper[33013]: I0313 11:15:47.366613 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 11:15:47.374371 master-0 kubenswrapper[33013]: I0313 11:15:47.374324 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 13 11:15:47.419724 master-0 kubenswrapper[33013]: I0313 11:15:47.419149 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:47.524167 master-0 kubenswrapper[33013]: I0313 11:15:47.524100 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.524445 master-0 kubenswrapper[33013]: I0313 11:15:47.524208 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.524445 master-0 kubenswrapper[33013]: I0313 11:15:47.524283 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2ksw\" (UniqueName: \"kubernetes.io/projected/d753f204-26cd-4edc-944a-724f848ed71b-kube-api-access-m2ksw\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.524445 master-0 kubenswrapper[33013]: I0313 11:15:47.524317 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/d753f204-26cd-4edc-944a-724f848ed71b-logs\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.524445 master-0 kubenswrapper[33013]: I0313 11:15:47.524353 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-config-data\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.628039 master-0 kubenswrapper[33013]: I0313 11:15:47.627965 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.628295 master-0 kubenswrapper[33013]: I0313 11:15:47.628072 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.628295 master-0 kubenswrapper[33013]: I0313 11:15:47.628156 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2ksw\" (UniqueName: \"kubernetes.io/projected/d753f204-26cd-4edc-944a-724f848ed71b-kube-api-access-m2ksw\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.628572 master-0 kubenswrapper[33013]: I0313 11:15:47.628521 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d753f204-26cd-4edc-944a-724f848ed71b-logs\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.628772 master-0 kubenswrapper[33013]: I0313 11:15:47.628750 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-config-data\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.629256 master-0 kubenswrapper[33013]: I0313 11:15:47.629167 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d753f204-26cd-4edc-944a-724f848ed71b-logs\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.634357 master-0 kubenswrapper[33013]: I0313 11:15:47.634274 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.635347 master-0 kubenswrapper[33013]: I0313 11:15:47.634505 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.645485 master-0 kubenswrapper[33013]: I0313 11:15:47.645440 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-config-data\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " 
pod="openstack/nova-metadata-0" Mar 13 11:15:47.694529 master-0 kubenswrapper[33013]: I0313 11:15:47.694413 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2ksw\" (UniqueName: \"kubernetes.io/projected/d753f204-26cd-4edc-944a-724f848ed71b-kube-api-access-m2ksw\") pod \"nova-metadata-0\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " pod="openstack/nova-metadata-0" Mar 13 11:15:47.725714 master-0 kubenswrapper[33013]: I0313 11:15:47.725656 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 11:15:47.725714 master-0 kubenswrapper[33013]: I0313 11:15:47.725725 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 11:15:47.726967 master-0 kubenswrapper[33013]: I0313 11:15:47.726925 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:15:48.124946 master-0 kubenswrapper[33013]: I0313 11:15:48.123996 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:15:48.274450 master-0 kubenswrapper[33013]: I0313 11:15:48.273743 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:15:48.396199 master-0 kubenswrapper[33013]: I0313 11:15:48.393215 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 13 11:15:48.396199 master-0 kubenswrapper[33013]: I0313 11:15:48.393286 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 13 11:15:48.417922 master-0 kubenswrapper[33013]: I0313 11:15:48.415233 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-6ldqw"] Mar 13 11:15:48.426667 master-0 kubenswrapper[33013]: I0313 11:15:48.420857 33013 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" podUID="8f3b3913-1778-4d02-8259-2968de468f92" containerName="dnsmasq-dns" containerID="cri-o://8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e" gracePeriod=10 Mar 13 11:15:48.533802 master-0 kubenswrapper[33013]: I0313 11:15:48.531777 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 13 11:15:48.746623 master-0 kubenswrapper[33013]: I0313 11:15:48.743364 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05827b59-c611-4b48-b2f5-ff87dd94ad6f" path="/var/lib/kubelet/pods/05827b59-c611-4b48-b2f5-ff87dd94ad6f/volumes" Mar 13 11:15:48.807932 master-0 kubenswrapper[33013]: I0313 11:15:48.807851 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.2:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:15:48.808304 master-0 kubenswrapper[33013]: I0313 11:15:48.808147 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.2:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:15:49.274660 master-0 kubenswrapper[33013]: I0313 11:15:49.268274 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 13 11:15:50.080615 master-0 kubenswrapper[33013]: I0313 11:15:50.080067 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" podUID="8f3b3913-1778-4d02-8259-2968de468f92" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.249:5353: connect: connection refused" Mar 13 11:15:55.254636 
master-0 kubenswrapper[33013]: I0313 11:15:55.253954 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:15:55.317003 master-0 kubenswrapper[33013]: I0313 11:15:55.316940 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-sb\") pod \"8f3b3913-1778-4d02-8259-2968de468f92\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " Mar 13 11:15:55.317299 master-0 kubenswrapper[33013]: I0313 11:15:55.317061 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-swift-storage-0\") pod \"8f3b3913-1778-4d02-8259-2968de468f92\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " Mar 13 11:15:55.317299 master-0 kubenswrapper[33013]: I0313 11:15:55.317187 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-config\") pod \"8f3b3913-1778-4d02-8259-2968de468f92\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " Mar 13 11:15:55.317380 master-0 kubenswrapper[33013]: I0313 11:15:55.317324 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jrnj\" (UniqueName: \"kubernetes.io/projected/8f3b3913-1778-4d02-8259-2968de468f92-kube-api-access-5jrnj\") pod \"8f3b3913-1778-4d02-8259-2968de468f92\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " Mar 13 11:15:55.317380 master-0 kubenswrapper[33013]: I0313 11:15:55.317352 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-svc\") pod \"8f3b3913-1778-4d02-8259-2968de468f92\" (UID: 
\"8f3b3913-1778-4d02-8259-2968de468f92\") " Mar 13 11:15:55.319533 master-0 kubenswrapper[33013]: I0313 11:15:55.317432 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-nb\") pod \"8f3b3913-1778-4d02-8259-2968de468f92\" (UID: \"8f3b3913-1778-4d02-8259-2968de468f92\") " Mar 13 11:15:55.334822 master-0 kubenswrapper[33013]: I0313 11:15:55.334754 33013 generic.go:334] "Generic (PLEG): container finished" podID="3991862a-ccb1-46f5-bdcc-74d3926df07a" containerID="5c37fd4a0223a0cf655982d9e52f27f6b3ea703fe5c07944ea17bdb4964fd4b8" exitCode=0 Mar 13 11:15:55.335071 master-0 kubenswrapper[33013]: I0313 11:15:55.334854 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-896qh" event={"ID":"3991862a-ccb1-46f5-bdcc-74d3926df07a","Type":"ContainerDied","Data":"5c37fd4a0223a0cf655982d9e52f27f6b3ea703fe5c07944ea17bdb4964fd4b8"} Mar 13 11:15:55.347764 master-0 kubenswrapper[33013]: I0313 11:15:55.343828 33013 generic.go:334] "Generic (PLEG): container finished" podID="125890f6-344c-4e04-ad6f-b721846fc632" containerID="ef6a074c8db26aa880d2de4041ab677ea20977c0fc065c73c55c2b8d147fe4c5" exitCode=0 Mar 13 11:15:55.347764 master-0 kubenswrapper[33013]: I0313 11:15:55.343927 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8xndq" event={"ID":"125890f6-344c-4e04-ad6f-b721846fc632","Type":"ContainerDied","Data":"ef6a074c8db26aa880d2de4041ab677ea20977c0fc065c73c55c2b8d147fe4c5"} Mar 13 11:15:55.347764 master-0 kubenswrapper[33013]: I0313 11:15:55.346088 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3b3913-1778-4d02-8259-2968de468f92-kube-api-access-5jrnj" (OuterVolumeSpecName: "kube-api-access-5jrnj") pod "8f3b3913-1778-4d02-8259-2968de468f92" (UID: "8f3b3913-1778-4d02-8259-2968de468f92"). 
InnerVolumeSpecName "kube-api-access-5jrnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:15:55.348367 master-0 kubenswrapper[33013]: I0313 11:15:55.348306 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"1bb040ad-c64d-465b-91ed-e961db81a52d","Type":"ContainerStarted","Data":"d3cc345cc1ea7303437d79a630a8fd370084ac6a08135f71a71b01cd94b31d88"} Mar 13 11:15:55.348720 master-0 kubenswrapper[33013]: I0313 11:15:55.348683 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:55.352334 master-0 kubenswrapper[33013]: I0313 11:15:55.352190 33013 generic.go:334] "Generic (PLEG): container finished" podID="8f3b3913-1778-4d02-8259-2968de468f92" containerID="8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e" exitCode=0 Mar 13 11:15:55.352674 master-0 kubenswrapper[33013]: I0313 11:15:55.352342 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" event={"ID":"8f3b3913-1778-4d02-8259-2968de468f92","Type":"ContainerDied","Data":"8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e"} Mar 13 11:15:55.352674 master-0 kubenswrapper[33013]: I0313 11:15:55.352379 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" event={"ID":"8f3b3913-1778-4d02-8259-2968de468f92","Type":"ContainerDied","Data":"89c684586b4e7d54732495371ec877e93d265d231d2a36877fa52ffd9515356a"} Mar 13 11:15:55.352674 master-0 kubenswrapper[33013]: I0313 11:15:55.352404 33013 scope.go:117] "RemoveContainer" containerID="8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e" Mar 13 11:15:55.352674 master-0 kubenswrapper[33013]: I0313 11:15:55.352556 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" Mar 13 11:15:55.362616 master-0 kubenswrapper[33013]: I0313 11:15:55.360471 33013 generic.go:334] "Generic (PLEG): container finished" podID="e16baf7d-8440-4431-a184-523ae34f6e6f" containerID="07801e3a787aa94a9b84927c3f31b029c0890d08b2bc3bc682ebd9d49be5999d" exitCode=0 Mar 13 11:15:55.362616 master-0 kubenswrapper[33013]: I0313 11:15:55.360548 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerDied","Data":"07801e3a787aa94a9b84927c3f31b029c0890d08b2bc3bc682ebd9d49be5999d"} Mar 13 11:15:55.392994 master-0 kubenswrapper[33013]: I0313 11:15:55.392900 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.821530217 podStartE2EDuration="19.392877278s" podCreationTimestamp="2026-03-13 11:15:36 +0000 UTC" firstStartedPulling="2026-03-13 11:15:38.340681738 +0000 UTC m=+1121.816635087" lastFinishedPulling="2026-03-13 11:15:54.912028809 +0000 UTC m=+1138.387982148" observedRunningTime="2026-03-13 11:15:55.381021719 +0000 UTC m=+1138.856975058" watchObservedRunningTime="2026-03-13 11:15:55.392877278 +0000 UTC m=+1138.868830627" Mar 13 11:15:55.403536 master-0 kubenswrapper[33013]: I0313 11:15:55.403284 33013 scope.go:117] "RemoveContainer" containerID="8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b" Mar 13 11:15:55.403536 master-0 kubenswrapper[33013]: I0313 11:15:55.403335 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8f3b3913-1778-4d02-8259-2968de468f92" (UID: "8f3b3913-1778-4d02-8259-2968de468f92"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:55.404142 master-0 kubenswrapper[33013]: I0313 11:15:55.404051 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-config" (OuterVolumeSpecName: "config") pod "8f3b3913-1778-4d02-8259-2968de468f92" (UID: "8f3b3913-1778-4d02-8259-2968de468f92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:55.422546 master-0 kubenswrapper[33013]: I0313 11:15:55.417751 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8f3b3913-1778-4d02-8259-2968de468f92" (UID: "8f3b3913-1778-4d02-8259-2968de468f92"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:55.425124 master-0 kubenswrapper[33013]: I0313 11:15:55.423826 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8f3b3913-1778-4d02-8259-2968de468f92" (UID: "8f3b3913-1778-4d02-8259-2968de468f92"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:55.430055 master-0 kubenswrapper[33013]: I0313 11:15:55.429447 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 13 11:15:55.431704 master-0 kubenswrapper[33013]: I0313 11:15:55.431673 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jrnj\" (UniqueName: \"kubernetes.io/projected/8f3b3913-1778-4d02-8259-2968de468f92-kube-api-access-5jrnj\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:55.431704 master-0 kubenswrapper[33013]: I0313 11:15:55.431704 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:55.431979 master-0 kubenswrapper[33013]: I0313 11:15:55.431717 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:55.431979 master-0 kubenswrapper[33013]: I0313 11:15:55.431728 33013 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:55.431979 master-0 kubenswrapper[33013]: I0313 11:15:55.431737 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:55.457695 master-0 kubenswrapper[33013]: I0313 11:15:55.457590 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"8f3b3913-1778-4d02-8259-2968de468f92" (UID: "8f3b3913-1778-4d02-8259-2968de468f92"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:15:55.469591 master-0 kubenswrapper[33013]: I0313 11:15:55.469419 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:55.510767 master-0 kubenswrapper[33013]: I0313 11:15:55.510714 33013 scope.go:117] "RemoveContainer" containerID="8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e" Mar 13 11:15:55.512908 master-0 kubenswrapper[33013]: E0313 11:15:55.512856 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e\": container with ID starting with 8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e not found: ID does not exist" containerID="8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e" Mar 13 11:15:55.512992 master-0 kubenswrapper[33013]: I0313 11:15:55.512911 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e"} err="failed to get container status \"8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e\": rpc error: code = NotFound desc = could not find container \"8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e\": container with ID starting with 8d930d2d6a67ce74d20a000616e1e1e4c30d23c9a25b7f8a5840d0ae8062ce5e not found: ID does not exist" Mar 13 11:15:55.512992 master-0 kubenswrapper[33013]: I0313 11:15:55.512941 33013 scope.go:117] "RemoveContainer" containerID="8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b" Mar 13 11:15:55.513314 master-0 kubenswrapper[33013]: E0313 11:15:55.513274 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b\": container with ID starting with 8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b not found: ID does not exist" containerID="8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b" Mar 13 11:15:55.513355 master-0 kubenswrapper[33013]: I0313 11:15:55.513311 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b"} err="failed to get container status \"8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b\": rpc error: code = NotFound desc = could not find container \"8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b\": container with ID starting with 8673b99b779e1fc8846502c51c269f9606cea2dec88659060ab2603e428af73b not found: ID does not exist" Mar 13 11:15:55.534558 master-0 kubenswrapper[33013]: I0313 11:15:55.534480 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f3b3913-1778-4d02-8259-2968de468f92-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:55.753965 master-0 kubenswrapper[33013]: I0313 11:15:55.753803 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-6ldqw"] Mar 13 11:15:55.801959 master-0 kubenswrapper[33013]: I0313 11:15:55.795211 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bfb994cb5-6ldqw"] Mar 13 11:15:56.398913 master-0 kubenswrapper[33013]: I0313 11:15:56.391304 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d753f204-26cd-4edc-944a-724f848ed71b","Type":"ContainerStarted","Data":"dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57"} Mar 13 11:15:56.398913 master-0 kubenswrapper[33013]: I0313 11:15:56.391370 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"d753f204-26cd-4edc-944a-724f848ed71b","Type":"ContainerStarted","Data":"6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54"} Mar 13 11:15:56.398913 master-0 kubenswrapper[33013]: I0313 11:15:56.391385 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d753f204-26cd-4edc-944a-724f848ed71b","Type":"ContainerStarted","Data":"5949db1e2d1cdb6e59f8860d03534b18ceb9ec8e1794f7a579ca995f8a268975"} Mar 13 11:15:56.404758 master-0 kubenswrapper[33013]: I0313 11:15:56.404695 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerStarted","Data":"71661a57a6ee800d19a01c1cd754f1347bb5e49fb56e15a3024cd1c0cb190e13"} Mar 13 11:15:56.419491 master-0 kubenswrapper[33013]: I0313 11:15:56.419392 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=9.41936535 podStartE2EDuration="9.41936535s" podCreationTimestamp="2026-03-13 11:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:15:56.413800111 +0000 UTC m=+1139.889753470" watchObservedRunningTime="2026-03-13 11:15:56.41936535 +0000 UTC m=+1139.895318709" Mar 13 11:15:56.753175 master-0 kubenswrapper[33013]: I0313 11:15:56.753125 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3b3913-1778-4d02-8259-2968de468f92" path="/var/lib/kubelet/pods/8f3b3913-1778-4d02-8259-2968de468f92/volumes" Mar 13 11:15:57.135017 master-0 kubenswrapper[33013]: I0313 11:15:57.134954 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:57.146372 master-0 kubenswrapper[33013]: I0313 11:15:57.146320 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:57.216868 master-0 kubenswrapper[33013]: I0313 11:15:57.216157 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-config-data\") pod \"3991862a-ccb1-46f5-bdcc-74d3926df07a\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " Mar 13 11:15:57.216868 master-0 kubenswrapper[33013]: I0313 11:15:57.216295 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-combined-ca-bundle\") pod \"125890f6-344c-4e04-ad6f-b721846fc632\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " Mar 13 11:15:57.217405 master-0 kubenswrapper[33013]: I0313 11:15:57.217075 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-scripts\") pod \"125890f6-344c-4e04-ad6f-b721846fc632\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " Mar 13 11:15:57.217405 master-0 kubenswrapper[33013]: I0313 11:15:57.217164 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-scripts\") pod \"3991862a-ccb1-46f5-bdcc-74d3926df07a\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " Mar 13 11:15:57.217405 master-0 kubenswrapper[33013]: I0313 11:15:57.217248 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-combined-ca-bundle\") pod \"3991862a-ccb1-46f5-bdcc-74d3926df07a\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " Mar 13 11:15:57.217405 master-0 kubenswrapper[33013]: I0313 11:15:57.217305 33013 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lnl72\" (UniqueName: \"kubernetes.io/projected/125890f6-344c-4e04-ad6f-b721846fc632-kube-api-access-lnl72\") pod \"125890f6-344c-4e04-ad6f-b721846fc632\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " Mar 13 11:15:57.218100 master-0 kubenswrapper[33013]: I0313 11:15:57.218072 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-config-data\") pod \"125890f6-344c-4e04-ad6f-b721846fc632\" (UID: \"125890f6-344c-4e04-ad6f-b721846fc632\") " Mar 13 11:15:57.218283 master-0 kubenswrapper[33013]: I0313 11:15:57.218152 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gljz5\" (UniqueName: \"kubernetes.io/projected/3991862a-ccb1-46f5-bdcc-74d3926df07a-kube-api-access-gljz5\") pod \"3991862a-ccb1-46f5-bdcc-74d3926df07a\" (UID: \"3991862a-ccb1-46f5-bdcc-74d3926df07a\") " Mar 13 11:15:57.223126 master-0 kubenswrapper[33013]: I0313 11:15:57.222575 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-scripts" (OuterVolumeSpecName: "scripts") pod "3991862a-ccb1-46f5-bdcc-74d3926df07a" (UID: "3991862a-ccb1-46f5-bdcc-74d3926df07a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:57.223779 master-0 kubenswrapper[33013]: I0313 11:15:57.223562 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3991862a-ccb1-46f5-bdcc-74d3926df07a-kube-api-access-gljz5" (OuterVolumeSpecName: "kube-api-access-gljz5") pod "3991862a-ccb1-46f5-bdcc-74d3926df07a" (UID: "3991862a-ccb1-46f5-bdcc-74d3926df07a"). InnerVolumeSpecName "kube-api-access-gljz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:15:57.227464 master-0 kubenswrapper[33013]: I0313 11:15:57.227411 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-scripts" (OuterVolumeSpecName: "scripts") pod "125890f6-344c-4e04-ad6f-b721846fc632" (UID: "125890f6-344c-4e04-ad6f-b721846fc632"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:57.244968 master-0 kubenswrapper[33013]: I0313 11:15:57.244789 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125890f6-344c-4e04-ad6f-b721846fc632-kube-api-access-lnl72" (OuterVolumeSpecName: "kube-api-access-lnl72") pod "125890f6-344c-4e04-ad6f-b721846fc632" (UID: "125890f6-344c-4e04-ad6f-b721846fc632"). InnerVolumeSpecName "kube-api-access-lnl72". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:15:57.250987 master-0 kubenswrapper[33013]: I0313 11:15:57.250922 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-config-data" (OuterVolumeSpecName: "config-data") pod "3991862a-ccb1-46f5-bdcc-74d3926df07a" (UID: "3991862a-ccb1-46f5-bdcc-74d3926df07a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:57.272084 master-0 kubenswrapper[33013]: I0313 11:15:57.272015 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3991862a-ccb1-46f5-bdcc-74d3926df07a" (UID: "3991862a-ccb1-46f5-bdcc-74d3926df07a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:57.272343 master-0 kubenswrapper[33013]: I0313 11:15:57.272148 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "125890f6-344c-4e04-ad6f-b721846fc632" (UID: "125890f6-344c-4e04-ad6f-b721846fc632"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:57.288806 master-0 kubenswrapper[33013]: I0313 11:15:57.288738 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-config-data" (OuterVolumeSpecName: "config-data") pod "125890f6-344c-4e04-ad6f-b721846fc632" (UID: "125890f6-344c-4e04-ad6f-b721846fc632"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:15:57.325179 master-0 kubenswrapper[33013]: I0313 11:15:57.325038 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:57.325179 master-0 kubenswrapper[33013]: I0313 11:15:57.325085 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:57.325179 master-0 kubenswrapper[33013]: I0313 11:15:57.325096 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:57.325179 master-0 kubenswrapper[33013]: I0313 11:15:57.325105 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:57.325179 master-0 kubenswrapper[33013]: I0313 11:15:57.325114 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3991862a-ccb1-46f5-bdcc-74d3926df07a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:57.325179 master-0 kubenswrapper[33013]: I0313 11:15:57.325126 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnl72\" (UniqueName: \"kubernetes.io/projected/125890f6-344c-4e04-ad6f-b721846fc632-kube-api-access-lnl72\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:57.325179 master-0 kubenswrapper[33013]: I0313 11:15:57.325136 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125890f6-344c-4e04-ad6f-b721846fc632-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:57.325179 master-0 kubenswrapper[33013]: I0313 11:15:57.325148 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gljz5\" (UniqueName: \"kubernetes.io/projected/3991862a-ccb1-46f5-bdcc-74d3926df07a-kube-api-access-gljz5\") on node \"master-0\" DevicePath \"\"" Mar 13 11:15:57.429796 master-0 kubenswrapper[33013]: I0313 11:15:57.429735 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerStarted","Data":"7ba5d6fab762a410d2a4b187c16667be0a55b78b75cc6ecac6828dc126ce090c"} Mar 13 11:15:57.432688 master-0 kubenswrapper[33013]: I0313 11:15:57.432368 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-896qh" event={"ID":"3991862a-ccb1-46f5-bdcc-74d3926df07a","Type":"ContainerDied","Data":"cd346b33d203a18341f290afef1683e171d00b3a42298b060759dff2004338c8"} Mar 13 11:15:57.432688 master-0 kubenswrapper[33013]: I0313 
11:15:57.432396 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd346b33d203a18341f290afef1683e171d00b3a42298b060759dff2004338c8" Mar 13 11:15:57.432688 master-0 kubenswrapper[33013]: I0313 11:15:57.432446 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-896qh" Mar 13 11:15:57.436767 master-0 kubenswrapper[33013]: I0313 11:15:57.436699 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8xndq" Mar 13 11:15:57.436898 master-0 kubenswrapper[33013]: I0313 11:15:57.436683 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8xndq" event={"ID":"125890f6-344c-4e04-ad6f-b721846fc632","Type":"ContainerDied","Data":"e3b42b1488e93076e797988f7e4e1f715f1156524f2b85fe6bd46513e3d93519"} Mar 13 11:15:57.437000 master-0 kubenswrapper[33013]: I0313 11:15:57.436982 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b42b1488e93076e797988f7e4e1f715f1156524f2b85fe6bd46513e3d93519" Mar 13 11:15:57.553454 master-0 kubenswrapper[33013]: I0313 11:15:57.553368 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 13 11:15:57.554149 master-0 kubenswrapper[33013]: E0313 11:15:57.554100 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3991862a-ccb1-46f5-bdcc-74d3926df07a" containerName="nova-manage" Mar 13 11:15:57.554149 master-0 kubenswrapper[33013]: I0313 11:15:57.554130 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="3991862a-ccb1-46f5-bdcc-74d3926df07a" containerName="nova-manage" Mar 13 11:15:57.554230 master-0 kubenswrapper[33013]: E0313 11:15:57.554152 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3b3913-1778-4d02-8259-2968de468f92" containerName="init" Mar 13 11:15:57.554230 master-0 kubenswrapper[33013]: 
I0313 11:15:57.554161 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3b3913-1778-4d02-8259-2968de468f92" containerName="init" Mar 13 11:15:57.554230 master-0 kubenswrapper[33013]: E0313 11:15:57.554192 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3b3913-1778-4d02-8259-2968de468f92" containerName="dnsmasq-dns" Mar 13 11:15:57.554230 master-0 kubenswrapper[33013]: I0313 11:15:57.554202 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3b3913-1778-4d02-8259-2968de468f92" containerName="dnsmasq-dns" Mar 13 11:15:57.554391 master-0 kubenswrapper[33013]: E0313 11:15:57.554237 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125890f6-344c-4e04-ad6f-b721846fc632" containerName="nova-cell1-conductor-db-sync" Mar 13 11:15:57.554391 master-0 kubenswrapper[33013]: I0313 11:15:57.554245 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="125890f6-344c-4e04-ad6f-b721846fc632" containerName="nova-cell1-conductor-db-sync" Mar 13 11:15:57.554514 master-0 kubenswrapper[33013]: I0313 11:15:57.554486 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="3991862a-ccb1-46f5-bdcc-74d3926df07a" containerName="nova-manage" Mar 13 11:15:57.554581 master-0 kubenswrapper[33013]: I0313 11:15:57.554514 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3b3913-1778-4d02-8259-2968de468f92" containerName="dnsmasq-dns" Mar 13 11:15:57.554581 master-0 kubenswrapper[33013]: I0313 11:15:57.554532 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="125890f6-344c-4e04-ad6f-b721846fc632" containerName="nova-cell1-conductor-db-sync" Mar 13 11:15:57.555717 master-0 kubenswrapper[33013]: I0313 11:15:57.555522 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.561273 master-0 kubenswrapper[33013]: I0313 11:15:57.561230 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 13 11:15:57.596723 master-0 kubenswrapper[33013]: I0313 11:15:57.593955 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 13 11:15:57.631344 master-0 kubenswrapper[33013]: I0313 11:15:57.631277 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90bcfe5-23f4-431e-b05c-dcc8f42306a0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a90bcfe5-23f4-431e-b05c-dcc8f42306a0\") " pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.631958 master-0 kubenswrapper[33013]: I0313 11:15:57.631878 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwgrb\" (UniqueName: \"kubernetes.io/projected/a90bcfe5-23f4-431e-b05c-dcc8f42306a0-kube-api-access-bwgrb\") pod \"nova-cell1-conductor-0\" (UID: \"a90bcfe5-23f4-431e-b05c-dcc8f42306a0\") " pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.632291 master-0 kubenswrapper[33013]: I0313 11:15:57.632248 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a90bcfe5-23f4-431e-b05c-dcc8f42306a0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a90bcfe5-23f4-431e-b05c-dcc8f42306a0\") " pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.690399 master-0 kubenswrapper[33013]: I0313 11:15:57.690331 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:15:57.690673 master-0 kubenswrapper[33013]: I0313 11:15:57.690626 33013 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-log" containerID="cri-o://1ad4fb4e3c978f085749b386e86d8ea448072a6f1ab1799db650dad9f8cfe917" gracePeriod=30 Mar 13 11:15:57.693636 master-0 kubenswrapper[33013]: I0313 11:15:57.691238 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-api" containerID="cri-o://40bac6011bbfc43c3123a59a3a36cfef05001daf7212c7c00cb6eeca338daf6b" gracePeriod=30 Mar 13 11:15:57.704498 master-0 kubenswrapper[33013]: I0313 11:15:57.704090 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:15:57.704498 master-0 kubenswrapper[33013]: I0313 11:15:57.704344 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e221b2cf-5955-4346-a540-24ccb3cbb967" containerName="nova-scheduler-scheduler" containerID="cri-o://1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5" gracePeriod=30 Mar 13 11:15:57.728529 master-0 kubenswrapper[33013]: I0313 11:15:57.728332 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 11:15:57.728529 master-0 kubenswrapper[33013]: I0313 11:15:57.728476 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 11:15:57.730546 master-0 kubenswrapper[33013]: I0313 11:15:57.728910 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 11:15:57.730546 master-0 kubenswrapper[33013]: I0313 11:15:57.728942 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 11:15:57.741363 master-0 kubenswrapper[33013]: I0313 11:15:57.741282 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90bcfe5-23f4-431e-b05c-dcc8f42306a0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a90bcfe5-23f4-431e-b05c-dcc8f42306a0\") " pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.741670 master-0 kubenswrapper[33013]: I0313 11:15:57.741650 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwgrb\" (UniqueName: \"kubernetes.io/projected/a90bcfe5-23f4-431e-b05c-dcc8f42306a0-kube-api-access-bwgrb\") pod \"nova-cell1-conductor-0\" (UID: \"a90bcfe5-23f4-431e-b05c-dcc8f42306a0\") " pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.741846 master-0 kubenswrapper[33013]: I0313 11:15:57.741821 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a90bcfe5-23f4-431e-b05c-dcc8f42306a0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a90bcfe5-23f4-431e-b05c-dcc8f42306a0\") " pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.764641 master-0 kubenswrapper[33013]: I0313 11:15:57.761477 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90bcfe5-23f4-431e-b05c-dcc8f42306a0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a90bcfe5-23f4-431e-b05c-dcc8f42306a0\") " pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.767744 master-0 kubenswrapper[33013]: I0313 11:15:57.767356 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a90bcfe5-23f4-431e-b05c-dcc8f42306a0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a90bcfe5-23f4-431e-b05c-dcc8f42306a0\") " pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.777551 master-0 kubenswrapper[33013]: I0313 11:15:57.777487 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwgrb\" (UniqueName: 
\"kubernetes.io/projected/a90bcfe5-23f4-431e-b05c-dcc8f42306a0-kube-api-access-bwgrb\") pod \"nova-cell1-conductor-0\" (UID: \"a90bcfe5-23f4-431e-b05c-dcc8f42306a0\") " pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:57.791678 master-0 kubenswrapper[33013]: I0313 11:15:57.787329 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:15:57.906679 master-0 kubenswrapper[33013]: I0313 11:15:57.905310 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 13 11:15:58.395722 master-0 kubenswrapper[33013]: E0313 11:15:58.392901 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 11:15:58.395722 master-0 kubenswrapper[33013]: E0313 11:15:58.394838 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 11:15:58.397005 master-0 kubenswrapper[33013]: E0313 11:15:58.396930 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 13 11:15:58.398095 master-0 kubenswrapper[33013]: E0313 11:15:58.397018 33013 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e221b2cf-5955-4346-a540-24ccb3cbb967" containerName="nova-scheduler-scheduler" Mar 13 11:15:58.469446 master-0 kubenswrapper[33013]: I0313 11:15:58.469373 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e16baf7d-8440-4431-a184-523ae34f6e6f","Type":"ContainerStarted","Data":"3542b685d6847c60534708341c738656a19f6b185305637cc7007c1073b19607"} Mar 13 11:15:58.470162 master-0 kubenswrapper[33013]: I0313 11:15:58.469707 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 13 11:15:58.470162 master-0 kubenswrapper[33013]: I0313 11:15:58.469770 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 13 11:15:58.479899 master-0 kubenswrapper[33013]: I0313 11:15:58.479838 33013 generic.go:334] "Generic (PLEG): container finished" podID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerID="1ad4fb4e3c978f085749b386e86d8ea448072a6f1ab1799db650dad9f8cfe917" exitCode=143 Mar 13 11:15:58.480196 master-0 kubenswrapper[33013]: I0313 11:15:58.479935 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8de14a46-90d5-4fc8-9823-ba42c7ab4c15","Type":"ContainerDied","Data":"1ad4fb4e3c978f085749b386e86d8ea448072a6f1ab1799db650dad9f8cfe917"} Mar 13 11:15:58.492251 master-0 kubenswrapper[33013]: I0313 11:15:58.488076 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 13 11:15:58.526574 master-0 kubenswrapper[33013]: I0313 11:15:58.525542 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=74.814893279 podStartE2EDuration="1m51.525520786s" podCreationTimestamp="2026-03-13 11:14:07 +0000 UTC" firstStartedPulling="2026-03-13 11:14:20.813070395 +0000 UTC 
m=+1044.289023744" lastFinishedPulling="2026-03-13 11:14:57.523697902 +0000 UTC m=+1080.999651251" observedRunningTime="2026-03-13 11:15:58.518771913 +0000 UTC m=+1141.994725262" watchObservedRunningTime="2026-03-13 11:15:58.525520786 +0000 UTC m=+1142.001474135" Mar 13 11:15:58.776624 master-0 kubenswrapper[33013]: I0313 11:15:58.775771 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:15:58.776624 master-0 kubenswrapper[33013]: I0313 11:15:58.775779 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.8:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 13 11:15:59.168043 master-0 kubenswrapper[33013]: I0313 11:15:59.167983 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Mar 13 11:15:59.499006 master-0 kubenswrapper[33013]: I0313 11:15:59.498793 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a90bcfe5-23f4-431e-b05c-dcc8f42306a0","Type":"ContainerStarted","Data":"a7780dbee0d867ea66350d3b40d4a9b1e4fb39a5529368eeea08003b14278ac8"} Mar 13 11:15:59.499006 master-0 kubenswrapper[33013]: I0313 11:15:59.498852 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a90bcfe5-23f4-431e-b05c-dcc8f42306a0","Type":"ContainerStarted","Data":"fafc15796584db9911ac650c1183b57392731e1178c6ec97e28b789217b078e8"} Mar 13 11:15:59.499757 master-0 kubenswrapper[33013]: I0313 11:15:59.499015 33013 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-metadata-0" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-log" containerID="cri-o://6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54" gracePeriod=30 Mar 13 11:15:59.500028 master-0 kubenswrapper[33013]: I0313 11:15:59.499986 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-metadata" containerID="cri-o://dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57" gracePeriod=30 Mar 13 11:15:59.589764 master-0 kubenswrapper[33013]: I0313 11:15:59.589663 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.589643545 podStartE2EDuration="2.589643545s" podCreationTimestamp="2026-03-13 11:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:15:59.585101495 +0000 UTC m=+1143.061054844" watchObservedRunningTime="2026-03-13 11:15:59.589643545 +0000 UTC m=+1143.065596894" Mar 13 11:16:00.081642 master-0 kubenswrapper[33013]: I0313 11:16:00.079247 33013 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bfb994cb5-6ldqw" podUID="8f3b3913-1778-4d02-8259-2968de468f92" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.249:5353: i/o timeout" Mar 13 11:16:00.513997 master-0 kubenswrapper[33013]: I0313 11:16:00.513814 33013 generic.go:334] "Generic (PLEG): container finished" podID="d753f204-26cd-4edc-944a-724f848ed71b" containerID="6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54" exitCode=143 Mar 13 11:16:00.515207 master-0 kubenswrapper[33013]: I0313 11:16:00.515127 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d753f204-26cd-4edc-944a-724f848ed71b","Type":"ContainerDied","Data":"6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54"} Mar 13 11:16:00.515706 master-0 kubenswrapper[33013]: I0313 11:16:00.515681 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 13 11:16:00.555017 master-0 kubenswrapper[33013]: I0313 11:16:00.554944 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Mar 13 11:16:00.763219 master-0 kubenswrapper[33013]: I0313 11:16:00.763173 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Mar 13 11:16:01.559968 master-0 kubenswrapper[33013]: I0313 11:16:01.559860 33013 generic.go:334] "Generic (PLEG): container finished" podID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerID="40bac6011bbfc43c3123a59a3a36cfef05001daf7212c7c00cb6eeca338daf6b" exitCode=0 Mar 13 11:16:01.560840 master-0 kubenswrapper[33013]: I0313 11:16:01.560153 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8de14a46-90d5-4fc8-9823-ba42c7ab4c15","Type":"ContainerDied","Data":"40bac6011bbfc43c3123a59a3a36cfef05001daf7212c7c00cb6eeca338daf6b"} Mar 13 11:16:01.567981 master-0 kubenswrapper[33013]: I0313 11:16:01.567905 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Mar 13 11:16:01.723323 master-0 kubenswrapper[33013]: I0313 11:16:01.723272 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:16:01.864110 master-0 kubenswrapper[33013]: I0313 11:16:01.863807 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zvdv\" (UniqueName: \"kubernetes.io/projected/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-kube-api-access-6zvdv\") pod \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " Mar 13 11:16:01.864385 master-0 kubenswrapper[33013]: I0313 11:16:01.864211 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-combined-ca-bundle\") pod \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " Mar 13 11:16:01.864385 master-0 kubenswrapper[33013]: I0313 11:16:01.864272 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-logs\") pod \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " Mar 13 11:16:01.864497 master-0 kubenswrapper[33013]: I0313 11:16:01.864385 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-config-data\") pod \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\" (UID: \"8de14a46-90d5-4fc8-9823-ba42c7ab4c15\") " Mar 13 11:16:01.866409 master-0 kubenswrapper[33013]: I0313 11:16:01.866346 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-logs" (OuterVolumeSpecName: "logs") pod "8de14a46-90d5-4fc8-9823-ba42c7ab4c15" (UID: "8de14a46-90d5-4fc8-9823-ba42c7ab4c15"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:16:01.871088 master-0 kubenswrapper[33013]: I0313 11:16:01.870866 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-kube-api-access-6zvdv" (OuterVolumeSpecName: "kube-api-access-6zvdv") pod "8de14a46-90d5-4fc8-9823-ba42c7ab4c15" (UID: "8de14a46-90d5-4fc8-9823-ba42c7ab4c15"). InnerVolumeSpecName "kube-api-access-6zvdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:01.899607 master-0 kubenswrapper[33013]: I0313 11:16:01.899513 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8de14a46-90d5-4fc8-9823-ba42c7ab4c15" (UID: "8de14a46-90d5-4fc8-9823-ba42c7ab4c15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:01.944296 master-0 kubenswrapper[33013]: I0313 11:16:01.944223 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-config-data" (OuterVolumeSpecName: "config-data") pod "8de14a46-90d5-4fc8-9823-ba42c7ab4c15" (UID: "8de14a46-90d5-4fc8-9823-ba42c7ab4c15"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:01.969805 master-0 kubenswrapper[33013]: I0313 11:16:01.969705 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:01.969805 master-0 kubenswrapper[33013]: I0313 11:16:01.969794 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zvdv\" (UniqueName: \"kubernetes.io/projected/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-kube-api-access-6zvdv\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:01.969805 master-0 kubenswrapper[33013]: I0313 11:16:01.969812 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:01.969805 master-0 kubenswrapper[33013]: I0313 11:16:01.969823 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8de14a46-90d5-4fc8-9823-ba42c7ab4c15-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:02.340634 master-0 kubenswrapper[33013]: I0313 11:16:02.340572 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:16:02.480740 master-0 kubenswrapper[33013]: I0313 11:16:02.480671 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbhtv\" (UniqueName: \"kubernetes.io/projected/e221b2cf-5955-4346-a540-24ccb3cbb967-kube-api-access-nbhtv\") pod \"e221b2cf-5955-4346-a540-24ccb3cbb967\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " Mar 13 11:16:02.480740 master-0 kubenswrapper[33013]: I0313 11:16:02.480746 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-combined-ca-bundle\") pod \"e221b2cf-5955-4346-a540-24ccb3cbb967\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " Mar 13 11:16:02.481063 master-0 kubenswrapper[33013]: I0313 11:16:02.480781 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-config-data\") pod \"e221b2cf-5955-4346-a540-24ccb3cbb967\" (UID: \"e221b2cf-5955-4346-a540-24ccb3cbb967\") " Mar 13 11:16:02.483994 master-0 kubenswrapper[33013]: I0313 11:16:02.483878 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e221b2cf-5955-4346-a540-24ccb3cbb967-kube-api-access-nbhtv" (OuterVolumeSpecName: "kube-api-access-nbhtv") pod "e221b2cf-5955-4346-a540-24ccb3cbb967" (UID: "e221b2cf-5955-4346-a540-24ccb3cbb967"). InnerVolumeSpecName "kube-api-access-nbhtv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:02.510381 master-0 kubenswrapper[33013]: I0313 11:16:02.510218 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-config-data" (OuterVolumeSpecName: "config-data") pod "e221b2cf-5955-4346-a540-24ccb3cbb967" (UID: "e221b2cf-5955-4346-a540-24ccb3cbb967"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:02.511028 master-0 kubenswrapper[33013]: I0313 11:16:02.510568 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e221b2cf-5955-4346-a540-24ccb3cbb967" (UID: "e221b2cf-5955-4346-a540-24ccb3cbb967"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:02.574418 master-0 kubenswrapper[33013]: I0313 11:16:02.574343 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8de14a46-90d5-4fc8-9823-ba42c7ab4c15","Type":"ContainerDied","Data":"a5a9423705e21cfb807979aab58b8ca6d1ec7140c35d1aca8b721c935c274815"} Mar 13 11:16:02.574943 master-0 kubenswrapper[33013]: I0313 11:16:02.574428 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:16:02.575598 master-0 kubenswrapper[33013]: I0313 11:16:02.574429 33013 scope.go:117] "RemoveContainer" containerID="40bac6011bbfc43c3123a59a3a36cfef05001daf7212c7c00cb6eeca338daf6b" Mar 13 11:16:02.578519 master-0 kubenswrapper[33013]: I0313 11:16:02.578290 33013 generic.go:334] "Generic (PLEG): container finished" podID="e221b2cf-5955-4346-a540-24ccb3cbb967" containerID="1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5" exitCode=0 Mar 13 11:16:02.578519 master-0 kubenswrapper[33013]: I0313 11:16:02.578325 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:16:02.578519 master-0 kubenswrapper[33013]: I0313 11:16:02.578365 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e221b2cf-5955-4346-a540-24ccb3cbb967","Type":"ContainerDied","Data":"1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5"} Mar 13 11:16:02.578519 master-0 kubenswrapper[33013]: I0313 11:16:02.578421 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e221b2cf-5955-4346-a540-24ccb3cbb967","Type":"ContainerDied","Data":"a62e1a2a952327d379f3959f34c40cd12d11446e018b3199c073c4569ff6dcd1"} Mar 13 11:16:02.583845 master-0 kubenswrapper[33013]: I0313 11:16:02.583797 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:02.583845 master-0 kubenswrapper[33013]: I0313 11:16:02.583835 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e221b2cf-5955-4346-a540-24ccb3cbb967-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:02.583845 master-0 kubenswrapper[33013]: I0313 11:16:02.583846 33013 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbhtv\" (UniqueName: \"kubernetes.io/projected/e221b2cf-5955-4346-a540-24ccb3cbb967-kube-api-access-nbhtv\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:02.602475 master-0 kubenswrapper[33013]: I0313 11:16:02.602428 33013 scope.go:117] "RemoveContainer" containerID="1ad4fb4e3c978f085749b386e86d8ea448072a6f1ab1799db650dad9f8cfe917" Mar 13 11:16:02.658460 master-0 kubenswrapper[33013]: I0313 11:16:02.655856 33013 scope.go:117] "RemoveContainer" containerID="1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5" Mar 13 11:16:02.666815 master-0 kubenswrapper[33013]: I0313 11:16:02.665884 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:02.713698 master-0 kubenswrapper[33013]: I0313 11:16:02.713645 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:02.750721 master-0 kubenswrapper[33013]: I0313 11:16:02.750662 33013 scope.go:117] "RemoveContainer" containerID="1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5" Mar 13 11:16:02.755422 master-0 kubenswrapper[33013]: E0313 11:16:02.755374 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5\": container with ID starting with 1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5 not found: ID does not exist" containerID="1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5" Mar 13 11:16:02.755615 master-0 kubenswrapper[33013]: I0313 11:16:02.755430 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5"} err="failed to get container status \"1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5\": rpc error: code = NotFound desc = could 
not find container \"1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5\": container with ID starting with 1ea369549783ba65d1d8dae19d796f07f02cc7c599c0c2259138fcb606f31db5 not found: ID does not exist" Mar 13 11:16:02.757884 master-0 kubenswrapper[33013]: E0313 11:16:02.757829 33013 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de14a46_90d5_4fc8_9823_ba42c7ab4c15.slice/crio-a5a9423705e21cfb807979aab58b8ca6d1ec7140c35d1aca8b721c935c274815\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode221b2cf_5955_4346_a540_24ccb3cbb967.slice\": RecentStats: unable to find data in memory cache]" Mar 13 11:16:02.759289 master-0 kubenswrapper[33013]: I0313 11:16:02.759255 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e221b2cf-5955-4346-a540-24ccb3cbb967" path="/var/lib/kubelet/pods/e221b2cf-5955-4346-a540-24ccb3cbb967/volumes" Mar 13 11:16:02.760001 master-0 kubenswrapper[33013]: I0313 11:16:02.759958 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:02.760001 master-0 kubenswrapper[33013]: I0313 11:16:02.759992 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:02.760650 master-0 kubenswrapper[33013]: E0313 11:16:02.760548 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-log" Mar 13 11:16:02.760650 master-0 kubenswrapper[33013]: I0313 11:16:02.760570 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-log" Mar 13 11:16:02.760650 master-0 kubenswrapper[33013]: E0313 11:16:02.760626 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e221b2cf-5955-4346-a540-24ccb3cbb967" 
containerName="nova-scheduler-scheduler" Mar 13 11:16:02.760650 master-0 kubenswrapper[33013]: I0313 11:16:02.760633 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="e221b2cf-5955-4346-a540-24ccb3cbb967" containerName="nova-scheduler-scheduler" Mar 13 11:16:02.761008 master-0 kubenswrapper[33013]: E0313 11:16:02.760708 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-api" Mar 13 11:16:02.761008 master-0 kubenswrapper[33013]: I0313 11:16:02.760717 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-api" Mar 13 11:16:02.761120 master-0 kubenswrapper[33013]: I0313 11:16:02.761010 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-log" Mar 13 11:16:02.761120 master-0 kubenswrapper[33013]: I0313 11:16:02.761022 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="e221b2cf-5955-4346-a540-24ccb3cbb967" containerName="nova-scheduler-scheduler" Mar 13 11:16:02.761120 master-0 kubenswrapper[33013]: I0313 11:16:02.761046 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" containerName="nova-api-api" Mar 13 11:16:02.761852 master-0 kubenswrapper[33013]: I0313 11:16:02.761795 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:02.761929 master-0 kubenswrapper[33013]: I0313 11:16:02.761908 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:16:02.763819 master-0 kubenswrapper[33013]: I0313 11:16:02.763740 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 11:16:02.768334 master-0 kubenswrapper[33013]: I0313 11:16:02.768300 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:02.814608 master-0 kubenswrapper[33013]: I0313 11:16:02.814533 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:02.816789 master-0 kubenswrapper[33013]: I0313 11:16:02.816750 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:16:02.818851 master-0 kubenswrapper[33013]: I0313 11:16:02.818808 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 11:16:02.826169 master-0 kubenswrapper[33013]: I0313 11:16:02.826109 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:02.913005 master-0 kubenswrapper[33013]: I0313 11:16:02.906853 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twpx9\" (UniqueName: \"kubernetes.io/projected/25164d15-596b-4a2d-a3f0-f79e373e1956-kube-api-access-twpx9\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:02.913005 master-0 kubenswrapper[33013]: I0313 11:16:02.907034 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-config-data\") pod \"nova-scheduler-0\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:02.913005 master-0 kubenswrapper[33013]: I0313 11:16:02.907099 33013 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25164d15-596b-4a2d-a3f0-f79e373e1956-logs\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:02.913005 master-0 kubenswrapper[33013]: I0313 11:16:02.907222 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-config-data\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:02.913005 master-0 kubenswrapper[33013]: I0313 11:16:02.907314 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:02.913005 master-0 kubenswrapper[33013]: I0313 11:16:02.907344 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:02.913005 master-0 kubenswrapper[33013]: I0313 11:16:02.907396 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvjx\" (UniqueName: \"kubernetes.io/projected/ecbe5862-3d02-4485-9892-a059eaa14438-kube-api-access-8bvjx\") pod \"nova-scheduler-0\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:03.050049 master-0 kubenswrapper[33013]: I0313 11:16:03.047849 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:03.050049 master-0 kubenswrapper[33013]: I0313 11:16:03.047922 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:03.050049 master-0 kubenswrapper[33013]: I0313 11:16:03.048686 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bvjx\" (UniqueName: \"kubernetes.io/projected/ecbe5862-3d02-4485-9892-a059eaa14438-kube-api-access-8bvjx\") pod \"nova-scheduler-0\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:03.050049 master-0 kubenswrapper[33013]: I0313 11:16:03.048965 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twpx9\" (UniqueName: \"kubernetes.io/projected/25164d15-596b-4a2d-a3f0-f79e373e1956-kube-api-access-twpx9\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:03.050049 master-0 kubenswrapper[33013]: I0313 11:16:03.049136 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-config-data\") pod \"nova-scheduler-0\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:03.050049 master-0 kubenswrapper[33013]: I0313 11:16:03.049202 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25164d15-596b-4a2d-a3f0-f79e373e1956-logs\") pod 
\"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:03.050049 master-0 kubenswrapper[33013]: I0313 11:16:03.049371 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-config-data\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:03.052138 master-0 kubenswrapper[33013]: I0313 11:16:03.052096 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:03.054543 master-0 kubenswrapper[33013]: I0313 11:16:03.054486 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-config-data\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:03.054778 master-0 kubenswrapper[33013]: I0313 11:16:03.054759 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25164d15-596b-4a2d-a3f0-f79e373e1956-logs\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:03.058530 master-0 kubenswrapper[33013]: I0313 11:16:03.058386 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-config-data\") pod \"nova-scheduler-0\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:03.067080 master-0 kubenswrapper[33013]: I0313 11:16:03.066576 33013 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:03.086618 master-0 kubenswrapper[33013]: I0313 11:16:03.082503 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bvjx\" (UniqueName: \"kubernetes.io/projected/ecbe5862-3d02-4485-9892-a059eaa14438-kube-api-access-8bvjx\") pod \"nova-scheduler-0\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:03.086618 master-0 kubenswrapper[33013]: I0313 11:16:03.083158 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twpx9\" (UniqueName: \"kubernetes.io/projected/25164d15-596b-4a2d-a3f0-f79e373e1956-kube-api-access-twpx9\") pod \"nova-api-0\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " pod="openstack/nova-api-0" Mar 13 11:16:03.097618 master-0 kubenswrapper[33013]: I0313 11:16:03.093304 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:16:03.155199 master-0 kubenswrapper[33013]: I0313 11:16:03.155127 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:16:03.604722 master-0 kubenswrapper[33013]: I0313 11:16:03.604642 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:03.605860 master-0 kubenswrapper[33013]: W0313 11:16:03.605816 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecbe5862_3d02_4485_9892_a059eaa14438.slice/crio-7372f40b4c25026cd813eb21177d53d3c087d1bb32742773c686da8c74fb1839 WatchSource:0}: Error finding container 7372f40b4c25026cd813eb21177d53d3c087d1bb32742773c686da8c74fb1839: Status 404 returned error can't find the container with id 7372f40b4c25026cd813eb21177d53d3c087d1bb32742773c686da8c74fb1839 Mar 13 11:16:03.808776 master-0 kubenswrapper[33013]: I0313 11:16:03.808709 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:04.566367 master-0 kubenswrapper[33013]: I0313 11:16:04.566282 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:16:04.644738 master-0 kubenswrapper[33013]: I0313 11:16:04.629527 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2ksw\" (UniqueName: \"kubernetes.io/projected/d753f204-26cd-4edc-944a-724f848ed71b-kube-api-access-m2ksw\") pod \"d753f204-26cd-4edc-944a-724f848ed71b\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " Mar 13 11:16:04.644738 master-0 kubenswrapper[33013]: I0313 11:16:04.629585 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-combined-ca-bundle\") pod \"d753f204-26cd-4edc-944a-724f848ed71b\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " Mar 13 11:16:04.644738 master-0 kubenswrapper[33013]: I0313 11:16:04.629744 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-nova-metadata-tls-certs\") pod \"d753f204-26cd-4edc-944a-724f848ed71b\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " Mar 13 11:16:04.644738 master-0 kubenswrapper[33013]: I0313 11:16:04.630769 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-config-data\") pod \"d753f204-26cd-4edc-944a-724f848ed71b\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " Mar 13 11:16:04.644738 master-0 kubenswrapper[33013]: I0313 11:16:04.630986 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d753f204-26cd-4edc-944a-724f848ed71b-logs\") pod \"d753f204-26cd-4edc-944a-724f848ed71b\" (UID: \"d753f204-26cd-4edc-944a-724f848ed71b\") " Mar 13 11:16:04.644738 master-0 kubenswrapper[33013]: I0313 11:16:04.632217 33013 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d753f204-26cd-4edc-944a-724f848ed71b-logs" (OuterVolumeSpecName: "logs") pod "d753f204-26cd-4edc-944a-724f848ed71b" (UID: "d753f204-26cd-4edc-944a-724f848ed71b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:16:04.644738 master-0 kubenswrapper[33013]: I0313 11:16:04.638898 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d753f204-26cd-4edc-944a-724f848ed71b-kube-api-access-m2ksw" (OuterVolumeSpecName: "kube-api-access-m2ksw") pod "d753f204-26cd-4edc-944a-724f848ed71b" (UID: "d753f204-26cd-4edc-944a-724f848ed71b"). InnerVolumeSpecName "kube-api-access-m2ksw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:04.644738 master-0 kubenswrapper[33013]: I0313 11:16:04.641049 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecbe5862-3d02-4485-9892-a059eaa14438","Type":"ContainerStarted","Data":"a39a75325780317f3062acb728072ded2c61eb3abf704082ec976927b549442b"} Mar 13 11:16:04.644738 master-0 kubenswrapper[33013]: I0313 11:16:04.641118 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecbe5862-3d02-4485-9892-a059eaa14438","Type":"ContainerStarted","Data":"7372f40b4c25026cd813eb21177d53d3c087d1bb32742773c686da8c74fb1839"} Mar 13 11:16:04.645618 master-0 kubenswrapper[33013]: I0313 11:16:04.645366 33013 generic.go:334] "Generic (PLEG): container finished" podID="d753f204-26cd-4edc-944a-724f848ed71b" containerID="dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57" exitCode=0 Mar 13 11:16:04.645618 master-0 kubenswrapper[33013]: I0313 11:16:04.645447 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:16:04.645618 master-0 kubenswrapper[33013]: I0313 11:16:04.645457 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d753f204-26cd-4edc-944a-724f848ed71b","Type":"ContainerDied","Data":"dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57"} Mar 13 11:16:04.645750 master-0 kubenswrapper[33013]: I0313 11:16:04.645644 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d753f204-26cd-4edc-944a-724f848ed71b","Type":"ContainerDied","Data":"5949db1e2d1cdb6e59f8860d03534b18ceb9ec8e1794f7a579ca995f8a268975"} Mar 13 11:16:04.645750 master-0 kubenswrapper[33013]: I0313 11:16:04.645674 33013 scope.go:117] "RemoveContainer" containerID="dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57" Mar 13 11:16:04.666616 master-0 kubenswrapper[33013]: I0313 11:16:04.661745 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25164d15-596b-4a2d-a3f0-f79e373e1956","Type":"ContainerStarted","Data":"da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981"} Mar 13 11:16:04.666616 master-0 kubenswrapper[33013]: I0313 11:16:04.661847 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25164d15-596b-4a2d-a3f0-f79e373e1956","Type":"ContainerStarted","Data":"57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d"} Mar 13 11:16:04.666616 master-0 kubenswrapper[33013]: I0313 11:16:04.661863 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25164d15-596b-4a2d-a3f0-f79e373e1956","Type":"ContainerStarted","Data":"d034e3aee59dca3945bfeb38217d39f355c6752bbde65336d5ef838ed5783837"} Mar 13 11:16:04.670616 master-0 kubenswrapper[33013]: I0313 11:16:04.667072 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" 
podStartSLOduration=2.66704437 podStartE2EDuration="2.66704437s" podCreationTimestamp="2026-03-13 11:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:04.662353416 +0000 UTC m=+1148.138306775" watchObservedRunningTime="2026-03-13 11:16:04.66704437 +0000 UTC m=+1148.142997729" Mar 13 11:16:04.683654 master-0 kubenswrapper[33013]: I0313 11:16:04.679316 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-config-data" (OuterVolumeSpecName: "config-data") pod "d753f204-26cd-4edc-944a-724f848ed71b" (UID: "d753f204-26cd-4edc-944a-724f848ed71b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:04.692613 master-0 kubenswrapper[33013]: I0313 11:16:04.692221 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.692142118 podStartE2EDuration="2.692142118s" podCreationTimestamp="2026-03-13 11:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:04.682059779 +0000 UTC m=+1148.158013148" watchObservedRunningTime="2026-03-13 11:16:04.692142118 +0000 UTC m=+1148.168095467" Mar 13 11:16:04.707166 master-0 kubenswrapper[33013]: I0313 11:16:04.702710 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d753f204-26cd-4edc-944a-724f848ed71b" (UID: "d753f204-26cd-4edc-944a-724f848ed71b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:04.720424 master-0 kubenswrapper[33013]: I0313 11:16:04.719971 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d753f204-26cd-4edc-944a-724f848ed71b" (UID: "d753f204-26cd-4edc-944a-724f848ed71b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:04.735934 master-0 kubenswrapper[33013]: I0313 11:16:04.735868 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2ksw\" (UniqueName: \"kubernetes.io/projected/d753f204-26cd-4edc-944a-724f848ed71b-kube-api-access-m2ksw\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:04.736262 master-0 kubenswrapper[33013]: I0313 11:16:04.736187 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de14a46-90d5-4fc8-9823-ba42c7ab4c15" path="/var/lib/kubelet/pods/8de14a46-90d5-4fc8-9823-ba42c7ab4c15/volumes"
Mar 13 11:16:04.738609 master-0 kubenswrapper[33013]: I0313 11:16:04.738523 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:04.739036 master-0 kubenswrapper[33013]: I0313 11:16:04.738991 33013 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:04.739036 master-0 kubenswrapper[33013]: I0313 11:16:04.739029 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d753f204-26cd-4edc-944a-724f848ed71b-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:04.739150 master-0 kubenswrapper[33013]: I0313 11:16:04.739050 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d753f204-26cd-4edc-944a-724f848ed71b-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:04.781517 master-0 kubenswrapper[33013]: I0313 11:16:04.781467 33013 scope.go:117] "RemoveContainer" containerID="6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54"
Mar 13 11:16:04.811798 master-0 kubenswrapper[33013]: I0313 11:16:04.811753 33013 scope.go:117] "RemoveContainer" containerID="dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57"
Mar 13 11:16:04.812386 master-0 kubenswrapper[33013]: E0313 11:16:04.812348 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57\": container with ID starting with dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57 not found: ID does not exist" containerID="dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57"
Mar 13 11:16:04.812447 master-0 kubenswrapper[33013]: I0313 11:16:04.812383 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57"} err="failed to get container status \"dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57\": rpc error: code = NotFound desc = could not find container \"dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57\": container with ID starting with dfd6c5019e2e902e834ce3c1e03bc459235d893d7909d1450afbc6c09fe8ec57 not found: ID does not exist"
Mar 13 11:16:04.812447 master-0 kubenswrapper[33013]: I0313 11:16:04.812405 33013 scope.go:117] "RemoveContainer" containerID="6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54"
Mar 13 11:16:04.812783 master-0 kubenswrapper[33013]: E0313 11:16:04.812750 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54\": container with ID starting with 6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54 not found: ID does not exist" containerID="6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54"
Mar 13 11:16:04.812783 master-0 kubenswrapper[33013]: I0313 11:16:04.812776 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54"} err="failed to get container status \"6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54\": rpc error: code = NotFound desc = could not find container \"6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54\": container with ID starting with 6837496028a648d6d2345a26aabfed4c661813ab653d4f185df87bf4558e9e54 not found: ID does not exist"
Mar 13 11:16:04.980614 master-0 kubenswrapper[33013]: I0313 11:16:04.980452 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 11:16:05.003485 master-0 kubenswrapper[33013]: I0313 11:16:05.003416 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 11:16:05.034785 master-0 kubenswrapper[33013]: I0313 11:16:05.034725 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 11:16:05.035299 master-0 kubenswrapper[33013]: E0313 11:16:05.035271 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-metadata"
Mar 13 11:16:05.035299 master-0 kubenswrapper[33013]: I0313 11:16:05.035292 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-metadata"
Mar 13 11:16:05.035403 master-0 kubenswrapper[33013]: E0313 11:16:05.035331 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-log"
Mar 13 11:16:05.035403 master-0 kubenswrapper[33013]: I0313 11:16:05.035339 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-log"
Mar 13 11:16:05.035605 master-0 kubenswrapper[33013]: I0313 11:16:05.035565 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-metadata"
Mar 13 11:16:05.035663 master-0 kubenswrapper[33013]: I0313 11:16:05.035621 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="d753f204-26cd-4edc-944a-724f848ed71b" containerName="nova-metadata-log"
Mar 13 11:16:05.036895 master-0 kubenswrapper[33013]: I0313 11:16:05.036820 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 13 11:16:05.039482 master-0 kubenswrapper[33013]: I0313 11:16:05.039458 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 13 11:16:05.039685 master-0 kubenswrapper[33013]: I0313 11:16:05.039667 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 13 11:16:05.082671 master-0 kubenswrapper[33013]: I0313 11:16:05.050088 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 11:16:05.145850 master-0 kubenswrapper[33013]: I0313 11:16:05.145796 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.146538 master-0 kubenswrapper[33013]: I0313 11:16:05.146469 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.146933 master-0 kubenswrapper[33013]: I0313 11:16:05.146904 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-config-data\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.147430 master-0 kubenswrapper[33013]: I0313 11:16:05.147408 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8tdc\" (UniqueName: \"kubernetes.io/projected/8757d7ed-ae03-4156-b659-9f3099567556-kube-api-access-w8tdc\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.147609 master-0 kubenswrapper[33013]: I0313 11:16:05.147569 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8757d7ed-ae03-4156-b659-9f3099567556-logs\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.249401 master-0 kubenswrapper[33013]: I0313 11:16:05.249276 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.249689 master-0 kubenswrapper[33013]: I0313 11:16:05.249667 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.249865 master-0 kubenswrapper[33013]: I0313 11:16:05.249849 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-config-data\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.249969 master-0 kubenswrapper[33013]: I0313 11:16:05.249956 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8tdc\" (UniqueName: \"kubernetes.io/projected/8757d7ed-ae03-4156-b659-9f3099567556-kube-api-access-w8tdc\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.250070 master-0 kubenswrapper[33013]: I0313 11:16:05.250056 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8757d7ed-ae03-4156-b659-9f3099567556-logs\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.250611 master-0 kubenswrapper[33013]: I0313 11:16:05.250576 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8757d7ed-ae03-4156-b659-9f3099567556-logs\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.254735 master-0 kubenswrapper[33013]: I0313 11:16:05.254406 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-config-data\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.256806 master-0 kubenswrapper[33013]: I0313 11:16:05.256746 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.257255 master-0 kubenswrapper[33013]: I0313 11:16:05.257206 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.266119 master-0 kubenswrapper[33013]: I0313 11:16:05.266085 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8tdc\" (UniqueName: \"kubernetes.io/projected/8757d7ed-ae03-4156-b659-9f3099567556-kube-api-access-w8tdc\") pod \"nova-metadata-0\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " pod="openstack/nova-metadata-0"
Mar 13 11:16:05.388298 master-0 kubenswrapper[33013]: I0313 11:16:05.388226 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 13 11:16:05.913618 master-0 kubenswrapper[33013]: I0313 11:16:05.899220 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 11:16:06.703000 master-0 kubenswrapper[33013]: I0313 11:16:06.702946 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8757d7ed-ae03-4156-b659-9f3099567556","Type":"ContainerStarted","Data":"1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365"}
Mar 13 11:16:06.703318 master-0 kubenswrapper[33013]: I0313 11:16:06.703302 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8757d7ed-ae03-4156-b659-9f3099567556","Type":"ContainerStarted","Data":"046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d"}
Mar 13 11:16:06.703399 master-0 kubenswrapper[33013]: I0313 11:16:06.703387 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8757d7ed-ae03-4156-b659-9f3099567556","Type":"ContainerStarted","Data":"a9345ebc5f0f5fadec11c1f7fd7ca3ee527403498c26e305a874a5f19ebf4ffe"}
Mar 13 11:16:06.736773 master-0 kubenswrapper[33013]: I0313 11:16:06.736669 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d753f204-26cd-4edc-944a-724f848ed71b" path="/var/lib/kubelet/pods/d753f204-26cd-4edc-944a-724f848ed71b/volumes"
Mar 13 11:16:06.968844 master-0 kubenswrapper[33013]: I0313 11:16:06.968367 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.968339799 podStartE2EDuration="2.968339799s" podCreationTimestamp="2026-03-13 11:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:06.955916323 +0000 UTC m=+1150.431869672" watchObservedRunningTime="2026-03-13 11:16:06.968339799 +0000 UTC m=+1150.444293148"
Mar 13 11:16:07.935886 master-0 kubenswrapper[33013]: I0313 11:16:07.935832 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Mar 13 11:16:08.094465 master-0 kubenswrapper[33013]: I0313 11:16:08.094391 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 13 11:16:10.389624 master-0 kubenswrapper[33013]: I0313 11:16:10.389528 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 13 11:16:10.389624 master-0 kubenswrapper[33013]: I0313 11:16:10.389626 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 13 11:16:13.095356 master-0 kubenswrapper[33013]: I0313 11:16:13.095280 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 13 11:16:13.138503 master-0 kubenswrapper[33013]: I0313 11:16:13.137996 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 13 11:16:13.162220 master-0 kubenswrapper[33013]: I0313 11:16:13.162164 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 13 11:16:13.162461 master-0 kubenswrapper[33013]: I0313 11:16:13.162233 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 13 11:16:13.815025 master-0 kubenswrapper[33013]: I0313 11:16:13.814982 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 13 11:16:14.203281 master-0 kubenswrapper[33013]: I0313 11:16:14.203128 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.11:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 11:16:14.243920 master-0 kubenswrapper[33013]: I0313 11:16:14.243848 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.11:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 11:16:15.390190 master-0 kubenswrapper[33013]: I0313 11:16:15.389810 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 13 11:16:15.390190 master-0 kubenswrapper[33013]: I0313 11:16:15.389866 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 13 11:16:15.663610 master-0 kubenswrapper[33013]: I0313 11:16:15.662732 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:15.812389 master-0 kubenswrapper[33013]: I0313 11:16:15.812249 33013 generic.go:334] "Generic (PLEG): container finished" podID="02632e68-2023-48cd-9770-d99d5a7301a0" containerID="f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3" exitCode=137
Mar 13 11:16:15.812389 master-0 kubenswrapper[33013]: I0313 11:16:15.812310 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:15.812942 master-0 kubenswrapper[33013]: I0313 11:16:15.812318 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"02632e68-2023-48cd-9770-d99d5a7301a0","Type":"ContainerDied","Data":"f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3"}
Mar 13 11:16:15.812942 master-0 kubenswrapper[33013]: I0313 11:16:15.812818 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"02632e68-2023-48cd-9770-d99d5a7301a0","Type":"ContainerDied","Data":"ed124cabaf87d263ecf7dec3b6692b4c30aa96d725b60cc2ccd89456861d1f11"}
Mar 13 11:16:15.812942 master-0 kubenswrapper[33013]: I0313 11:16:15.812852 33013 scope.go:117] "RemoveContainer" containerID="f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3"
Mar 13 11:16:15.840235 master-0 kubenswrapper[33013]: I0313 11:16:15.839945 33013 scope.go:117] "RemoveContainer" containerID="f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3"
Mar 13 11:16:15.841055 master-0 kubenswrapper[33013]: E0313 11:16:15.841008 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3\": container with ID starting with f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3 not found: ID does not exist" containerID="f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3"
Mar 13 11:16:15.841126 master-0 kubenswrapper[33013]: I0313 11:16:15.841075 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3"} err="failed to get container status \"f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3\": rpc error: code = NotFound desc = could not find container \"f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3\": container with ID starting with f8ec1ab89021b98a4cd613391869c4c2c762549b26eb641c7b6bc6c703a01ed3 not found: ID does not exist"
Mar 13 11:16:15.850206 master-0 kubenswrapper[33013]: I0313 11:16:15.845349 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6cck\" (UniqueName: \"kubernetes.io/projected/02632e68-2023-48cd-9770-d99d5a7301a0-kube-api-access-k6cck\") pod \"02632e68-2023-48cd-9770-d99d5a7301a0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") "
Mar 13 11:16:15.850206 master-0 kubenswrapper[33013]: I0313 11:16:15.845660 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-config-data\") pod \"02632e68-2023-48cd-9770-d99d5a7301a0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") "
Mar 13 11:16:15.850206 master-0 kubenswrapper[33013]: I0313 11:16:15.845863 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-combined-ca-bundle\") pod \"02632e68-2023-48cd-9770-d99d5a7301a0\" (UID: \"02632e68-2023-48cd-9770-d99d5a7301a0\") "
Mar 13 11:16:15.853720 master-0 kubenswrapper[33013]: I0313 11:16:15.852802 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02632e68-2023-48cd-9770-d99d5a7301a0-kube-api-access-k6cck" (OuterVolumeSpecName: "kube-api-access-k6cck") pod "02632e68-2023-48cd-9770-d99d5a7301a0" (UID: "02632e68-2023-48cd-9770-d99d5a7301a0"). InnerVolumeSpecName "kube-api-access-k6cck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:16:15.883431 master-0 kubenswrapper[33013]: I0313 11:16:15.883327 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-config-data" (OuterVolumeSpecName: "config-data") pod "02632e68-2023-48cd-9770-d99d5a7301a0" (UID: "02632e68-2023-48cd-9770-d99d5a7301a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:15.889157 master-0 kubenswrapper[33013]: I0313 11:16:15.888855 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02632e68-2023-48cd-9770-d99d5a7301a0" (UID: "02632e68-2023-48cd-9770-d99d5a7301a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:15.949294 master-0 kubenswrapper[33013]: I0313 11:16:15.949119 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:15.949294 master-0 kubenswrapper[33013]: I0313 11:16:15.949177 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6cck\" (UniqueName: \"kubernetes.io/projected/02632e68-2023-48cd-9770-d99d5a7301a0-kube-api-access-k6cck\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:15.949294 master-0 kubenswrapper[33013]: I0313 11:16:15.949191 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02632e68-2023-48cd-9770-d99d5a7301a0-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:16.169032 master-0 kubenswrapper[33013]: I0313 11:16:16.168947 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 13 11:16:16.184428 master-0 kubenswrapper[33013]: I0313 11:16:16.184325 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 13 11:16:16.209221 master-0 kubenswrapper[33013]: I0313 11:16:16.209073 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 13 11:16:16.209800 master-0 kubenswrapper[33013]: E0313 11:16:16.209765 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02632e68-2023-48cd-9770-d99d5a7301a0" containerName="nova-cell1-novncproxy-novncproxy"
Mar 13 11:16:16.209800 master-0 kubenswrapper[33013]: I0313 11:16:16.209792 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="02632e68-2023-48cd-9770-d99d5a7301a0" containerName="nova-cell1-novncproxy-novncproxy"
Mar 13 11:16:16.210200 master-0 kubenswrapper[33013]: I0313 11:16:16.210177 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="02632e68-2023-48cd-9770-d99d5a7301a0" containerName="nova-cell1-novncproxy-novncproxy"
Mar 13 11:16:16.211227 master-0 kubenswrapper[33013]: I0313 11:16:16.211193 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.213843 master-0 kubenswrapper[33013]: I0313 11:16:16.213785 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Mar 13 11:16:16.214040 master-0 kubenswrapper[33013]: I0313 11:16:16.213846 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 13 11:16:16.214040 master-0 kubenswrapper[33013]: I0313 11:16:16.213912 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Mar 13 11:16:16.223002 master-0 kubenswrapper[33013]: I0313 11:16:16.222897 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 13 11:16:16.360962 master-0 kubenswrapper[33013]: I0313 11:16:16.360885 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.361231 master-0 kubenswrapper[33013]: I0313 11:16:16.361126 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.361474 master-0 kubenswrapper[33013]: I0313 11:16:16.361424 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.361857 master-0 kubenswrapper[33013]: I0313 11:16:16.361822 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.362046 master-0 kubenswrapper[33013]: I0313 11:16:16.361967 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xnrt\" (UniqueName: \"kubernetes.io/projected/b638bf6a-78cd-479c-9674-963114eebfd7-kube-api-access-2xnrt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.404974 master-0 kubenswrapper[33013]: I0313 11:16:16.404721 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 11:16:16.412340 master-0 kubenswrapper[33013]: I0313 11:16:16.405040 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 11:16:16.464353 master-0 kubenswrapper[33013]: I0313 11:16:16.464166 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.470723 master-0 kubenswrapper[33013]: I0313 11:16:16.465980 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xnrt\" (UniqueName: \"kubernetes.io/projected/b638bf6a-78cd-479c-9674-963114eebfd7-kube-api-access-2xnrt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.470723 master-0 kubenswrapper[33013]: I0313 11:16:16.466089 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.470723 master-0 kubenswrapper[33013]: I0313 11:16:16.466245 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.470723 master-0 kubenswrapper[33013]: I0313 11:16:16.466301 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.476127 master-0 kubenswrapper[33013]: I0313 11:16:16.471323 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.492611 master-0 kubenswrapper[33013]: I0313 11:16:16.487980 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.492611 master-0 kubenswrapper[33013]: I0313 11:16:16.488285 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.492611 master-0 kubenswrapper[33013]: I0313 11:16:16.488364 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b638bf6a-78cd-479c-9674-963114eebfd7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.498739 master-0 kubenswrapper[33013]: I0313 11:16:16.497163 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xnrt\" (UniqueName: \"kubernetes.io/projected/b638bf6a-78cd-479c-9674-963114eebfd7-kube-api-access-2xnrt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b638bf6a-78cd-479c-9674-963114eebfd7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.536167 master-0 kubenswrapper[33013]: I0313 11:16:16.536087 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:16.748834 master-0 kubenswrapper[33013]: I0313 11:16:16.748019 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02632e68-2023-48cd-9770-d99d5a7301a0" path="/var/lib/kubelet/pods/02632e68-2023-48cd-9770-d99d5a7301a0/volumes"
Mar 13 11:16:17.021228 master-0 kubenswrapper[33013]: I0313 11:16:17.021019 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 13 11:16:17.869440 master-0 kubenswrapper[33013]: I0313 11:16:17.869329 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b638bf6a-78cd-479c-9674-963114eebfd7","Type":"ContainerStarted","Data":"cf0a0f6f45ddcd88db6b2680ee62b516da30525c7956e1266df601c61f0e9e75"}
Mar 13 11:16:17.869440 master-0 kubenswrapper[33013]: I0313 11:16:17.869409 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b638bf6a-78cd-479c-9674-963114eebfd7","Type":"ContainerStarted","Data":"4759be31e6359a24e50f36f69152cd339c70319424b16b9237b1e8f3bb2bbd3a"}
Mar 13 11:16:17.925890 master-0 kubenswrapper[33013]: I0313 11:16:17.925786 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.9257608240000001 podStartE2EDuration="1.925760824s" podCreationTimestamp="2026-03-13 11:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:17.916705005 +0000 UTC m=+1161.392658374" watchObservedRunningTime="2026-03-13 11:16:17.925760824 +0000 UTC m=+1161.401714173"
Mar 13 11:16:21.536987 master-0 kubenswrapper[33013]: I0313 11:16:21.536926 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 13 11:16:23.160641 master-0 kubenswrapper[33013]: I0313 11:16:23.160562 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 13 11:16:23.161806 master-0 kubenswrapper[33013]: I0313 11:16:23.161727 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 13 11:16:23.163748 master-0 kubenswrapper[33013]: I0313 11:16:23.163699 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 13 11:16:23.164702 master-0 kubenswrapper[33013]: I0313 11:16:23.164573 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 13 11:16:23.950845 master-0 kubenswrapper[33013]: I0313 11:16:23.950787 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 13 11:16:23.954289 master-0 kubenswrapper[33013]: I0313 11:16:23.954245 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 13 11:16:24.250522 master-0 kubenswrapper[33013]: I0313 11:16:24.247223 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58fdc6f86c-ffrf5"]
Mar 13 11:16:24.250522 master-0 kubenswrapper[33013]: I0313 11:16:24.249447 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58fdc6f86c-ffrf5"]
Mar 13 11:16:24.250522 master-0 kubenswrapper[33013]: I0313 11:16:24.249541 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5"
Mar 13 11:16:24.396080 master-0 kubenswrapper[33013]: I0313 11:16:24.396007 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68cwc\" (UniqueName: \"kubernetes.io/projected/533e0624-93b4-4673-949c-b55cb52ea48a-kube-api-access-68cwc\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5"
Mar 13 11:16:24.396550 master-0 kubenswrapper[33013]: I0313 11:16:24.396460 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-dns-swift-storage-0\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5"
Mar 13 11:16:24.396740 master-0 kubenswrapper[33013]: I0313 11:16:24.396680 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-ovsdbserver-sb\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5"
Mar 13 11:16:24.397169 master-0 kubenswrapper[33013]: I0313 11:16:24.396939 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-dns-svc\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5"
Mar 13 11:16:24.397263 master-0 kubenswrapper[33013]: I0313 11:16:24.397214 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-config\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.397320 master-0 kubenswrapper[33013]: I0313 11:16:24.397291 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-ovsdbserver-nb\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.499052 master-0 kubenswrapper[33013]: I0313 11:16:24.498960 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-dns-svc\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.499421 master-0 kubenswrapper[33013]: I0313 11:16:24.499385 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-config\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.499638 master-0 kubenswrapper[33013]: I0313 11:16:24.499620 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-ovsdbserver-nb\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.499875 master-0 kubenswrapper[33013]: I0313 11:16:24.499860 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68cwc\" 
(UniqueName: \"kubernetes.io/projected/533e0624-93b4-4673-949c-b55cb52ea48a-kube-api-access-68cwc\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.500180 master-0 kubenswrapper[33013]: I0313 11:16:24.500142 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-dns-svc\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.500259 master-0 kubenswrapper[33013]: I0313 11:16:24.500244 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-dns-swift-storage-0\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.500449 master-0 kubenswrapper[33013]: I0313 11:16:24.500433 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-ovsdbserver-sb\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.500775 master-0 kubenswrapper[33013]: I0313 11:16:24.500708 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-config\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.501094 master-0 kubenswrapper[33013]: I0313 11:16:24.500897 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-ovsdbserver-nb\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.501701 master-0 kubenswrapper[33013]: I0313 11:16:24.501662 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-dns-swift-storage-0\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.501968 master-0 kubenswrapper[33013]: I0313 11:16:24.501934 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/533e0624-93b4-4673-949c-b55cb52ea48a-ovsdbserver-sb\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.518223 master-0 kubenswrapper[33013]: I0313 11:16:24.518176 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68cwc\" (UniqueName: \"kubernetes.io/projected/533e0624-93b4-4673-949c-b55cb52ea48a-kube-api-access-68cwc\") pod \"dnsmasq-dns-58fdc6f86c-ffrf5\" (UID: \"533e0624-93b4-4673-949c-b55cb52ea48a\") " pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:24.595640 master-0 kubenswrapper[33013]: I0313 11:16:24.595569 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:25.179711 master-0 kubenswrapper[33013]: I0313 11:16:25.177752 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58fdc6f86c-ffrf5"] Mar 13 11:16:25.546792 master-0 kubenswrapper[33013]: I0313 11:16:25.546749 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 11:16:25.553947 master-0 kubenswrapper[33013]: I0313 11:16:25.553890 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 11:16:25.555357 master-0 kubenswrapper[33013]: I0313 11:16:25.555331 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 11:16:25.990911 master-0 kubenswrapper[33013]: I0313 11:16:25.990757 33013 generic.go:334] "Generic (PLEG): container finished" podID="533e0624-93b4-4673-949c-b55cb52ea48a" containerID="6b554334c5ba4bfc645788f98faf3f193e5bf27db60d1c471a2faa1e7d7b5c0c" exitCode=0 Mar 13 11:16:25.990911 master-0 kubenswrapper[33013]: I0313 11:16:25.990877 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" event={"ID":"533e0624-93b4-4673-949c-b55cb52ea48a","Type":"ContainerDied","Data":"6b554334c5ba4bfc645788f98faf3f193e5bf27db60d1c471a2faa1e7d7b5c0c"} Mar 13 11:16:25.991189 master-0 kubenswrapper[33013]: I0313 11:16:25.990946 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" event={"ID":"533e0624-93b4-4673-949c-b55cb52ea48a","Type":"ContainerStarted","Data":"ff7b72894b946ae1106b799727c920e5b7c3d4e81f441efaf505843004d7b7e4"} Mar 13 11:16:25.997477 master-0 kubenswrapper[33013]: I0313 11:16:25.997431 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 11:16:26.538330 master-0 kubenswrapper[33013]: I0313 11:16:26.537224 33013 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:16:26.558398 master-0 kubenswrapper[33013]: I0313 11:16:26.558345 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:16:27.013609 master-0 kubenswrapper[33013]: I0313 11:16:27.012847 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" event={"ID":"533e0624-93b4-4673-949c-b55cb52ea48a","Type":"ContainerStarted","Data":"da993b86b9d175f1ca8b077fb9764dec34e1b0f6aea13a1c48433ce5eaa662c3"} Mar 13 11:16:27.014381 master-0 kubenswrapper[33013]: I0313 11:16:27.014365 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:27.030437 master-0 kubenswrapper[33013]: I0313 11:16:27.030394 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 13 11:16:27.042853 master-0 kubenswrapper[33013]: I0313 11:16:27.042770 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" podStartSLOduration=3.042745767 podStartE2EDuration="3.042745767s" podCreationTimestamp="2026-03-13 11:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:27.040099411 +0000 UTC m=+1170.516052760" watchObservedRunningTime="2026-03-13 11:16:27.042745767 +0000 UTC m=+1170.518699116" Mar 13 11:16:27.441177 master-0 kubenswrapper[33013]: I0313 11:16:27.441108 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-j7f5k"] Mar 13 11:16:27.443448 master-0 kubenswrapper[33013]: I0313 11:16:27.443382 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.448776 master-0 kubenswrapper[33013]: I0313 11:16:27.448722 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 13 11:16:27.449021 master-0 kubenswrapper[33013]: I0313 11:16:27.448719 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 13 11:16:27.472313 master-0 kubenswrapper[33013]: I0313 11:16:27.466899 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-p22b4"] Mar 13 11:16:27.472313 master-0 kubenswrapper[33013]: I0313 11:16:27.468819 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.492682 master-0 kubenswrapper[33013]: I0313 11:16:27.492366 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7f5k"] Mar 13 11:16:27.515450 master-0 kubenswrapper[33013]: I0313 11:16:27.511135 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-p22b4"] Mar 13 11:16:27.549709 master-0 kubenswrapper[33013]: I0313 11:16:27.549545 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-scripts\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.549709 master-0 kubenswrapper[33013]: I0313 11:16:27.549667 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-config-data\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " 
pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.550036 master-0 kubenswrapper[33013]: I0313 11:16:27.549855 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl9qf\" (UniqueName: \"kubernetes.io/projected/f36fdace-b3b6-4c56-a39b-db6246d57dda-kube-api-access-rl9qf\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.550036 master-0 kubenswrapper[33013]: I0313 11:16:27.549931 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.550036 master-0 kubenswrapper[33013]: I0313 11:16:27.549962 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-combined-ca-bundle\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.550036 master-0 kubenswrapper[33013]: I0313 11:16:27.550002 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkhhq\" (UniqueName: \"kubernetes.io/projected/3a2d04fd-e054-421f-862a-d01159c5c3a2-kube-api-access-bkhhq\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.550244 master-0 kubenswrapper[33013]: I0313 11:16:27.550049 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-config-data\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.550244 master-0 kubenswrapper[33013]: I0313 11:16:27.550116 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-scripts\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.652143 master-0 kubenswrapper[33013]: I0313 11:16:27.652082 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.652143 master-0 kubenswrapper[33013]: I0313 11:16:27.652142 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-combined-ca-bundle\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.652719 master-0 kubenswrapper[33013]: I0313 11:16:27.652323 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkhhq\" (UniqueName: \"kubernetes.io/projected/3a2d04fd-e054-421f-862a-d01159c5c3a2-kube-api-access-bkhhq\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.652719 master-0 kubenswrapper[33013]: I0313 11:16:27.652392 33013 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-config-data\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.652719 master-0 kubenswrapper[33013]: I0313 11:16:27.652464 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-scripts\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.652719 master-0 kubenswrapper[33013]: I0313 11:16:27.652497 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-scripts\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.652719 master-0 kubenswrapper[33013]: I0313 11:16:27.652534 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-config-data\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.652719 master-0 kubenswrapper[33013]: I0313 11:16:27.652620 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl9qf\" (UniqueName: \"kubernetes.io/projected/f36fdace-b3b6-4c56-a39b-db6246d57dda-kube-api-access-rl9qf\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.655676 master-0 kubenswrapper[33013]: I0313 11:16:27.655564 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-combined-ca-bundle\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.655946 master-0 kubenswrapper[33013]: I0313 11:16:27.655915 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.656067 master-0 kubenswrapper[33013]: I0313 11:16:27.656008 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-scripts\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.656695 master-0 kubenswrapper[33013]: I0313 11:16:27.656663 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-config-data\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.656848 master-0 kubenswrapper[33013]: I0313 11:16:27.656827 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-scripts\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.664079 master-0 kubenswrapper[33013]: I0313 11:16:27.664030 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-config-data\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.677573 master-0 kubenswrapper[33013]: I0313 11:16:27.677485 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl9qf\" (UniqueName: \"kubernetes.io/projected/f36fdace-b3b6-4c56-a39b-db6246d57dda-kube-api-access-rl9qf\") pod \"nova-cell1-cell-mapping-j7f5k\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") " pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.678293 master-0 kubenswrapper[33013]: I0313 11:16:27.678218 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkhhq\" (UniqueName: \"kubernetes.io/projected/3a2d04fd-e054-421f-862a-d01159c5c3a2-kube-api-access-bkhhq\") pod \"nova-cell1-host-discover-p22b4\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.818630 master-0 kubenswrapper[33013]: I0313 11:16:27.818478 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7f5k" Mar 13 11:16:27.833390 master-0 kubenswrapper[33013]: I0313 11:16:27.833328 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:27.855713 master-0 kubenswrapper[33013]: I0313 11:16:27.854999 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:27.855713 master-0 kubenswrapper[33013]: I0313 11:16:27.855300 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-log" containerID="cri-o://57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d" gracePeriod=30 Mar 13 11:16:27.856276 master-0 kubenswrapper[33013]: I0313 11:16:27.856208 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-api" containerID="cri-o://da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981" gracePeriod=30 Mar 13 11:16:28.033539 master-0 kubenswrapper[33013]: I0313 11:16:28.033446 33013 generic.go:334] "Generic (PLEG): container finished" podID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerID="57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d" exitCode=143 Mar 13 11:16:28.035492 master-0 kubenswrapper[33013]: I0313 11:16:28.035289 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25164d15-596b-4a2d-a3f0-f79e373e1956","Type":"ContainerDied","Data":"57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d"} Mar 13 11:16:28.389651 master-0 kubenswrapper[33013]: I0313 11:16:28.388891 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7f5k"] Mar 13 11:16:28.404885 master-0 kubenswrapper[33013]: W0313 11:16:28.404807 33013 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf36fdace_b3b6_4c56_a39b_db6246d57dda.slice/crio-c4377e9bb59d52773d9fcc708412ed86efc095e30da160e7be456da8fdddb6e6 WatchSource:0}: Error finding container c4377e9bb59d52773d9fcc708412ed86efc095e30da160e7be456da8fdddb6e6: Status 404 returned error can't find the container with id c4377e9bb59d52773d9fcc708412ed86efc095e30da160e7be456da8fdddb6e6 Mar 13 11:16:28.514012 master-0 kubenswrapper[33013]: W0313 11:16:28.513954 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a2d04fd_e054_421f_862a_d01159c5c3a2.slice/crio-086eb924063b1dc61d38bb090f625d957fd0e0c545f854f4ba23addca956e377 WatchSource:0}: Error finding container 086eb924063b1dc61d38bb090f625d957fd0e0c545f854f4ba23addca956e377: Status 404 returned error can't find the container with id 086eb924063b1dc61d38bb090f625d957fd0e0c545f854f4ba23addca956e377 Mar 13 11:16:28.515510 master-0 kubenswrapper[33013]: I0313 11:16:28.515445 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-p22b4"] Mar 13 11:16:29.048904 master-0 kubenswrapper[33013]: I0313 11:16:29.048841 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7f5k" event={"ID":"f36fdace-b3b6-4c56-a39b-db6246d57dda","Type":"ContainerStarted","Data":"b3172bef34f21072d332e6e6f45054de6730f1a2da723dd3abebe62aaf8eea2c"} Mar 13 11:16:29.049521 master-0 kubenswrapper[33013]: I0313 11:16:29.048910 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7f5k" event={"ID":"f36fdace-b3b6-4c56-a39b-db6246d57dda","Type":"ContainerStarted","Data":"c4377e9bb59d52773d9fcc708412ed86efc095e30da160e7be456da8fdddb6e6"} Mar 13 11:16:29.052981 master-0 kubenswrapper[33013]: I0313 11:16:29.052932 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-p22b4" 
event={"ID":"3a2d04fd-e054-421f-862a-d01159c5c3a2","Type":"ContainerStarted","Data":"bab484a8c6ce82831fd4102eeceaf5f5f84fc3322bd904e8b2cdf91ae430f371"} Mar 13 11:16:29.052981 master-0 kubenswrapper[33013]: I0313 11:16:29.052977 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-p22b4" event={"ID":"3a2d04fd-e054-421f-862a-d01159c5c3a2","Type":"ContainerStarted","Data":"086eb924063b1dc61d38bb090f625d957fd0e0c545f854f4ba23addca956e377"} Mar 13 11:16:29.081458 master-0 kubenswrapper[33013]: I0313 11:16:29.081364 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-j7f5k" podStartSLOduration=2.08134054 podStartE2EDuration="2.08134054s" podCreationTimestamp="2026-03-13 11:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:29.069289465 +0000 UTC m=+1172.545242814" watchObservedRunningTime="2026-03-13 11:16:29.08134054 +0000 UTC m=+1172.557293889" Mar 13 11:16:31.688490 master-0 kubenswrapper[33013]: I0313 11:16:31.688437 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:16:31.730808 master-0 kubenswrapper[33013]: I0313 11:16:31.730605 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-p22b4" podStartSLOduration=4.730558254 podStartE2EDuration="4.730558254s" podCreationTimestamp="2026-03-13 11:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:29.102024692 +0000 UTC m=+1172.577978061" watchObservedRunningTime="2026-03-13 11:16:31.730558254 +0000 UTC m=+1175.206511603" Mar 13 11:16:31.775875 master-0 kubenswrapper[33013]: I0313 11:16:31.775809 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25164d15-596b-4a2d-a3f0-f79e373e1956-logs\") pod \"25164d15-596b-4a2d-a3f0-f79e373e1956\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " Mar 13 11:16:31.776184 master-0 kubenswrapper[33013]: I0313 11:16:31.776153 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-config-data\") pod \"25164d15-596b-4a2d-a3f0-f79e373e1956\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " Mar 13 11:16:31.776842 master-0 kubenswrapper[33013]: I0313 11:16:31.776770 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twpx9\" (UniqueName: \"kubernetes.io/projected/25164d15-596b-4a2d-a3f0-f79e373e1956-kube-api-access-twpx9\") pod \"25164d15-596b-4a2d-a3f0-f79e373e1956\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " Mar 13 11:16:31.777001 master-0 kubenswrapper[33013]: I0313 11:16:31.776967 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-combined-ca-bundle\") pod \"25164d15-596b-4a2d-a3f0-f79e373e1956\" (UID: \"25164d15-596b-4a2d-a3f0-f79e373e1956\") " Mar 13 11:16:31.786041 master-0 kubenswrapper[33013]: I0313 11:16:31.785234 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25164d15-596b-4a2d-a3f0-f79e373e1956-logs" (OuterVolumeSpecName: "logs") pod "25164d15-596b-4a2d-a3f0-f79e373e1956" (UID: "25164d15-596b-4a2d-a3f0-f79e373e1956"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:16:31.795122 master-0 kubenswrapper[33013]: I0313 11:16:31.795057 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25164d15-596b-4a2d-a3f0-f79e373e1956-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:31.796312 master-0 kubenswrapper[33013]: I0313 11:16:31.796240 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25164d15-596b-4a2d-a3f0-f79e373e1956-kube-api-access-twpx9" (OuterVolumeSpecName: "kube-api-access-twpx9") pod "25164d15-596b-4a2d-a3f0-f79e373e1956" (UID: "25164d15-596b-4a2d-a3f0-f79e373e1956"). InnerVolumeSpecName "kube-api-access-twpx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:31.852571 master-0 kubenswrapper[33013]: I0313 11:16:31.850939 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-config-data" (OuterVolumeSpecName: "config-data") pod "25164d15-596b-4a2d-a3f0-f79e373e1956" (UID: "25164d15-596b-4a2d-a3f0-f79e373e1956"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:31.859428 master-0 kubenswrapper[33013]: I0313 11:16:31.859335 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25164d15-596b-4a2d-a3f0-f79e373e1956" (UID: "25164d15-596b-4a2d-a3f0-f79e373e1956"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:31.897841 master-0 kubenswrapper[33013]: I0313 11:16:31.897741 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:31.897841 master-0 kubenswrapper[33013]: I0313 11:16:31.897793 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25164d15-596b-4a2d-a3f0-f79e373e1956-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:31.897841 master-0 kubenswrapper[33013]: I0313 11:16:31.897808 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twpx9\" (UniqueName: \"kubernetes.io/projected/25164d15-596b-4a2d-a3f0-f79e373e1956-kube-api-access-twpx9\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:32.095922 master-0 kubenswrapper[33013]: I0313 11:16:32.095854 33013 generic.go:334] "Generic (PLEG): container finished" podID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerID="da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981" exitCode=0 Mar 13 11:16:32.096209 master-0 kubenswrapper[33013]: I0313 11:16:32.095941 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25164d15-596b-4a2d-a3f0-f79e373e1956","Type":"ContainerDied","Data":"da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981"} Mar 13 11:16:32.096209 master-0 
kubenswrapper[33013]: I0313 11:16:32.095978 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25164d15-596b-4a2d-a3f0-f79e373e1956","Type":"ContainerDied","Data":"d034e3aee59dca3945bfeb38217d39f355c6752bbde65336d5ef838ed5783837"} Mar 13 11:16:32.096209 master-0 kubenswrapper[33013]: I0313 11:16:32.096003 33013 scope.go:117] "RemoveContainer" containerID="da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981" Mar 13 11:16:32.096209 master-0 kubenswrapper[33013]: I0313 11:16:32.096172 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:16:32.101922 master-0 kubenswrapper[33013]: I0313 11:16:32.101877 33013 generic.go:334] "Generic (PLEG): container finished" podID="3a2d04fd-e054-421f-862a-d01159c5c3a2" containerID="bab484a8c6ce82831fd4102eeceaf5f5f84fc3322bd904e8b2cdf91ae430f371" exitCode=0 Mar 13 11:16:32.102069 master-0 kubenswrapper[33013]: I0313 11:16:32.101936 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-p22b4" event={"ID":"3a2d04fd-e054-421f-862a-d01159c5c3a2","Type":"ContainerDied","Data":"bab484a8c6ce82831fd4102eeceaf5f5f84fc3322bd904e8b2cdf91ae430f371"} Mar 13 11:16:32.124542 master-0 kubenswrapper[33013]: I0313 11:16:32.122419 33013 scope.go:117] "RemoveContainer" containerID="57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d" Mar 13 11:16:32.157720 master-0 kubenswrapper[33013]: I0313 11:16:32.153550 33013 scope.go:117] "RemoveContainer" containerID="da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981" Mar 13 11:16:32.157720 master-0 kubenswrapper[33013]: E0313 11:16:32.154182 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981\": container with ID starting with 
da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981 not found: ID does not exist" containerID="da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981" Mar 13 11:16:32.157720 master-0 kubenswrapper[33013]: I0313 11:16:32.154238 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981"} err="failed to get container status \"da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981\": rpc error: code = NotFound desc = could not find container \"da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981\": container with ID starting with da2c1ed9b97ce45fb1225a7553ff2175fab3715cf21f4f1c4842ee63d2b03981 not found: ID does not exist" Mar 13 11:16:32.157720 master-0 kubenswrapper[33013]: I0313 11:16:32.154273 33013 scope.go:117] "RemoveContainer" containerID="57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d" Mar 13 11:16:32.168506 master-0 kubenswrapper[33013]: E0313 11:16:32.165397 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d\": container with ID starting with 57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d not found: ID does not exist" containerID="57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d" Mar 13 11:16:32.168506 master-0 kubenswrapper[33013]: I0313 11:16:32.165459 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d"} err="failed to get container status \"57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d\": rpc error: code = NotFound desc = could not find container \"57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d\": container with ID starting with 
57cdc257c003163e5b68d0b6efc94de5cc0c5590d597f56c8dc75cad17c8f76d not found: ID does not exist" Mar 13 11:16:32.176413 master-0 kubenswrapper[33013]: I0313 11:16:32.175811 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:32.197185 master-0 kubenswrapper[33013]: I0313 11:16:32.196117 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:32.213578 master-0 kubenswrapper[33013]: I0313 11:16:32.212510 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:32.218373 master-0 kubenswrapper[33013]: E0313 11:16:32.218319 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-log" Mar 13 11:16:32.218491 master-0 kubenswrapper[33013]: I0313 11:16:32.218479 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-log" Mar 13 11:16:32.218614 master-0 kubenswrapper[33013]: E0313 11:16:32.218598 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-api" Mar 13 11:16:32.219113 master-0 kubenswrapper[33013]: I0313 11:16:32.218675 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-api" Mar 13 11:16:32.226094 master-0 kubenswrapper[33013]: I0313 11:16:32.226075 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-log" Mar 13 11:16:32.226277 master-0 kubenswrapper[33013]: I0313 11:16:32.226265 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" containerName="nova-api-api" Mar 13 11:16:32.243436 master-0 kubenswrapper[33013]: I0313 11:16:32.243296 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:16:32.246364 master-0 kubenswrapper[33013]: I0313 11:16:32.246120 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 13 11:16:32.246633 master-0 kubenswrapper[33013]: I0313 11:16:32.246601 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 13 11:16:32.248603 master-0 kubenswrapper[33013]: I0313 11:16:32.246995 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 13 11:16:32.275031 master-0 kubenswrapper[33013]: I0313 11:16:32.274942 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:32.310725 master-0 kubenswrapper[33013]: I0313 11:16:32.310652 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-config-data\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.311082 master-0 kubenswrapper[33013]: I0313 11:16:32.311061 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5b9\" (UniqueName: \"kubernetes.io/projected/cbb109df-3f48-4e58-bd8f-70753cdbd68d-kube-api-access-8g5b9\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.311335 master-0 kubenswrapper[33013]: I0313 11:16:32.311238 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbb109df-3f48-4e58-bd8f-70753cdbd68d-logs\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.311494 master-0 kubenswrapper[33013]: I0313 11:16:32.311479 33013 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.311603 master-0 kubenswrapper[33013]: I0313 11:16:32.311570 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.311845 master-0 kubenswrapper[33013]: I0313 11:16:32.311775 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-public-tls-certs\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.414611 master-0 kubenswrapper[33013]: I0313 11:16:32.414436 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-config-data\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.414611 master-0 kubenswrapper[33013]: I0313 11:16:32.414539 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5b9\" (UniqueName: \"kubernetes.io/projected/cbb109df-3f48-4e58-bd8f-70753cdbd68d-kube-api-access-8g5b9\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.414611 master-0 kubenswrapper[33013]: I0313 11:16:32.414569 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/cbb109df-3f48-4e58-bd8f-70753cdbd68d-logs\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.414993 master-0 kubenswrapper[33013]: I0313 11:16:32.414663 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.414993 master-0 kubenswrapper[33013]: I0313 11:16:32.414690 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.414993 master-0 kubenswrapper[33013]: I0313 11:16:32.414707 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-public-tls-certs\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.415469 master-0 kubenswrapper[33013]: I0313 11:16:32.415422 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbb109df-3f48-4e58-bd8f-70753cdbd68d-logs\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.418800 master-0 kubenswrapper[33013]: I0313 11:16:32.418751 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-config-data\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 
11:16:32.418885 master-0 kubenswrapper[33013]: I0313 11:16:32.418807 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-public-tls-certs\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.419395 master-0 kubenswrapper[33013]: I0313 11:16:32.419348 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.420523 master-0 kubenswrapper[33013]: I0313 11:16:32.420485 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.435116 master-0 kubenswrapper[33013]: I0313 11:16:32.435076 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5b9\" (UniqueName: \"kubernetes.io/projected/cbb109df-3f48-4e58-bd8f-70753cdbd68d-kube-api-access-8g5b9\") pod \"nova-api-0\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") " pod="openstack/nova-api-0" Mar 13 11:16:32.589901 master-0 kubenswrapper[33013]: I0313 11:16:32.589831 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 13 11:16:32.751717 master-0 kubenswrapper[33013]: I0313 11:16:32.747775 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25164d15-596b-4a2d-a3f0-f79e373e1956" path="/var/lib/kubelet/pods/25164d15-596b-4a2d-a3f0-f79e373e1956/volumes" Mar 13 11:16:33.090920 master-0 kubenswrapper[33013]: W0313 11:16:33.089115 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbb109df_3f48_4e58_bd8f_70753cdbd68d.slice/crio-92a38206875768d6071c51a5a9471462090bc4d47f85894257627dfba4030e41 WatchSource:0}: Error finding container 92a38206875768d6071c51a5a9471462090bc4d47f85894257627dfba4030e41: Status 404 returned error can't find the container with id 92a38206875768d6071c51a5a9471462090bc4d47f85894257627dfba4030e41 Mar 13 11:16:33.101693 master-0 kubenswrapper[33013]: I0313 11:16:33.101631 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 13 11:16:33.132993 master-0 kubenswrapper[33013]: I0313 11:16:33.131009 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbb109df-3f48-4e58-bd8f-70753cdbd68d","Type":"ContainerStarted","Data":"92a38206875768d6071c51a5a9471462090bc4d47f85894257627dfba4030e41"} Mar 13 11:16:33.608441 master-0 kubenswrapper[33013]: I0313 11:16:33.608239 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:33.660696 master-0 kubenswrapper[33013]: I0313 11:16:33.660386 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkhhq\" (UniqueName: \"kubernetes.io/projected/3a2d04fd-e054-421f-862a-d01159c5c3a2-kube-api-access-bkhhq\") pod \"3a2d04fd-e054-421f-862a-d01159c5c3a2\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " Mar 13 11:16:33.660696 master-0 kubenswrapper[33013]: I0313 11:16:33.660522 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-combined-ca-bundle\") pod \"3a2d04fd-e054-421f-862a-d01159c5c3a2\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " Mar 13 11:16:33.660696 master-0 kubenswrapper[33013]: I0313 11:16:33.660648 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-config-data\") pod \"3a2d04fd-e054-421f-862a-d01159c5c3a2\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " Mar 13 11:16:33.661008 master-0 kubenswrapper[33013]: I0313 11:16:33.660771 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-scripts\") pod \"3a2d04fd-e054-421f-862a-d01159c5c3a2\" (UID: \"3a2d04fd-e054-421f-862a-d01159c5c3a2\") " Mar 13 11:16:33.667521 master-0 kubenswrapper[33013]: I0313 11:16:33.667449 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a2d04fd-e054-421f-862a-d01159c5c3a2-kube-api-access-bkhhq" (OuterVolumeSpecName: "kube-api-access-bkhhq") pod "3a2d04fd-e054-421f-862a-d01159c5c3a2" (UID: "3a2d04fd-e054-421f-862a-d01159c5c3a2"). InnerVolumeSpecName "kube-api-access-bkhhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:33.668628 master-0 kubenswrapper[33013]: I0313 11:16:33.668474 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-scripts" (OuterVolumeSpecName: "scripts") pod "3a2d04fd-e054-421f-862a-d01159c5c3a2" (UID: "3a2d04fd-e054-421f-862a-d01159c5c3a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:33.743427 master-0 kubenswrapper[33013]: I0313 11:16:33.743365 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-config-data" (OuterVolumeSpecName: "config-data") pod "3a2d04fd-e054-421f-862a-d01159c5c3a2" (UID: "3a2d04fd-e054-421f-862a-d01159c5c3a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:33.743968 master-0 kubenswrapper[33013]: I0313 11:16:33.743916 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a2d04fd-e054-421f-862a-d01159c5c3a2" (UID: "3a2d04fd-e054-421f-862a-d01159c5c3a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:33.763962 master-0 kubenswrapper[33013]: I0313 11:16:33.763808 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkhhq\" (UniqueName: \"kubernetes.io/projected/3a2d04fd-e054-421f-862a-d01159c5c3a2-kube-api-access-bkhhq\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:33.763962 master-0 kubenswrapper[33013]: I0313 11:16:33.763891 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:33.763962 master-0 kubenswrapper[33013]: I0313 11:16:33.763901 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:33.763962 master-0 kubenswrapper[33013]: I0313 11:16:33.763913 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2d04fd-e054-421f-862a-d01159c5c3a2-scripts\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:34.156713 master-0 kubenswrapper[33013]: I0313 11:16:34.156668 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-p22b4" event={"ID":"3a2d04fd-e054-421f-862a-d01159c5c3a2","Type":"ContainerDied","Data":"086eb924063b1dc61d38bb090f625d957fd0e0c545f854f4ba23addca956e377"} Mar 13 11:16:34.156967 master-0 kubenswrapper[33013]: I0313 11:16:34.156950 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086eb924063b1dc61d38bb090f625d957fd0e0c545f854f4ba23addca956e377" Mar 13 11:16:34.157068 master-0 kubenswrapper[33013]: I0313 11:16:34.156690 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-p22b4" Mar 13 11:16:34.166712 master-0 kubenswrapper[33013]: I0313 11:16:34.164939 33013 generic.go:334] "Generic (PLEG): container finished" podID="f36fdace-b3b6-4c56-a39b-db6246d57dda" containerID="b3172bef34f21072d332e6e6f45054de6730f1a2da723dd3abebe62aaf8eea2c" exitCode=0 Mar 13 11:16:34.166712 master-0 kubenswrapper[33013]: I0313 11:16:34.165061 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7f5k" event={"ID":"f36fdace-b3b6-4c56-a39b-db6246d57dda","Type":"ContainerDied","Data":"b3172bef34f21072d332e6e6f45054de6730f1a2da723dd3abebe62aaf8eea2c"} Mar 13 11:16:34.168661 master-0 kubenswrapper[33013]: I0313 11:16:34.168312 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbb109df-3f48-4e58-bd8f-70753cdbd68d","Type":"ContainerStarted","Data":"1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c"} Mar 13 11:16:34.168661 master-0 kubenswrapper[33013]: I0313 11:16:34.168376 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbb109df-3f48-4e58-bd8f-70753cdbd68d","Type":"ContainerStarted","Data":"4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f"} Mar 13 11:16:34.227810 master-0 kubenswrapper[33013]: I0313 11:16:34.227637 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.227616635 podStartE2EDuration="2.227616635s" podCreationTimestamp="2026-03-13 11:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:34.21872267 +0000 UTC m=+1177.694676019" watchObservedRunningTime="2026-03-13 11:16:34.227616635 +0000 UTC m=+1177.703569984" Mar 13 11:16:34.597684 master-0 kubenswrapper[33013]: I0313 11:16:34.597569 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-58fdc6f86c-ffrf5" Mar 13 11:16:34.806896 master-0 kubenswrapper[33013]: I0313 11:16:34.806823 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-qhldm"] Mar 13 11:16:34.807474 master-0 kubenswrapper[33013]: I0313 11:16:34.807169 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" podUID="16b462d2-f716-400e-9ff5-51f843fbc2e9" containerName="dnsmasq-dns" containerID="cri-o://599f42b5b1bc146c48ece4cf639cfa898ee1803a53d5dfa04f6f3511f0df2dd1" gracePeriod=10 Mar 13 11:16:35.225217 master-0 kubenswrapper[33013]: I0313 11:16:35.224934 33013 generic.go:334] "Generic (PLEG): container finished" podID="16b462d2-f716-400e-9ff5-51f843fbc2e9" containerID="599f42b5b1bc146c48ece4cf639cfa898ee1803a53d5dfa04f6f3511f0df2dd1" exitCode=0 Mar 13 11:16:35.226794 master-0 kubenswrapper[33013]: I0313 11:16:35.226724 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" event={"ID":"16b462d2-f716-400e-9ff5-51f843fbc2e9","Type":"ContainerDied","Data":"599f42b5b1bc146c48ece4cf639cfa898ee1803a53d5dfa04f6f3511f0df2dd1"} Mar 13 11:16:35.460694 master-0 kubenswrapper[33013]: I0313 11:16:35.460648 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" Mar 13 11:16:35.511290 master-0 kubenswrapper[33013]: I0313 11:16:35.511169 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-sb\") pod \"16b462d2-f716-400e-9ff5-51f843fbc2e9\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " Mar 13 11:16:35.511633 master-0 kubenswrapper[33013]: I0313 11:16:35.511616 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-swift-storage-0\") pod \"16b462d2-f716-400e-9ff5-51f843fbc2e9\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " Mar 13 11:16:35.511820 master-0 kubenswrapper[33013]: I0313 11:16:35.511805 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-svc\") pod \"16b462d2-f716-400e-9ff5-51f843fbc2e9\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " Mar 13 11:16:35.511911 master-0 kubenswrapper[33013]: I0313 11:16:35.511899 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-config\") pod \"16b462d2-f716-400e-9ff5-51f843fbc2e9\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " Mar 13 11:16:35.512078 master-0 kubenswrapper[33013]: I0313 11:16:35.512061 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-nb\") pod \"16b462d2-f716-400e-9ff5-51f843fbc2e9\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " Mar 13 11:16:35.512571 master-0 kubenswrapper[33013]: I0313 11:16:35.512551 33013 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fv6x8\" (UniqueName: \"kubernetes.io/projected/16b462d2-f716-400e-9ff5-51f843fbc2e9-kube-api-access-fv6x8\") pod \"16b462d2-f716-400e-9ff5-51f843fbc2e9\" (UID: \"16b462d2-f716-400e-9ff5-51f843fbc2e9\") " Mar 13 11:16:35.516695 master-0 kubenswrapper[33013]: I0313 11:16:35.516651 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b462d2-f716-400e-9ff5-51f843fbc2e9-kube-api-access-fv6x8" (OuterVolumeSpecName: "kube-api-access-fv6x8") pod "16b462d2-f716-400e-9ff5-51f843fbc2e9" (UID: "16b462d2-f716-400e-9ff5-51f843fbc2e9"). InnerVolumeSpecName "kube-api-access-fv6x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:35.610259 master-0 kubenswrapper[33013]: I0313 11:16:35.610145 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16b462d2-f716-400e-9ff5-51f843fbc2e9" (UID: "16b462d2-f716-400e-9ff5-51f843fbc2e9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:16:35.620542 master-0 kubenswrapper[33013]: I0313 11:16:35.620184 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:35.620542 master-0 kubenswrapper[33013]: I0313 11:16:35.620263 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv6x8\" (UniqueName: \"kubernetes.io/projected/16b462d2-f716-400e-9ff5-51f843fbc2e9-kube-api-access-fv6x8\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:35.626208 master-0 kubenswrapper[33013]: I0313 11:16:35.626115 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16b462d2-f716-400e-9ff5-51f843fbc2e9" (UID: "16b462d2-f716-400e-9ff5-51f843fbc2e9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:16:35.632720 master-0 kubenswrapper[33013]: I0313 11:16:35.632552 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16b462d2-f716-400e-9ff5-51f843fbc2e9" (UID: "16b462d2-f716-400e-9ff5-51f843fbc2e9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:16:35.649338 master-0 kubenswrapper[33013]: I0313 11:16:35.649234 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "16b462d2-f716-400e-9ff5-51f843fbc2e9" (UID: "16b462d2-f716-400e-9ff5-51f843fbc2e9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:16:35.651837 master-0 kubenswrapper[33013]: I0313 11:16:35.651423 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-config" (OuterVolumeSpecName: "config") pod "16b462d2-f716-400e-9ff5-51f843fbc2e9" (UID: "16b462d2-f716-400e-9ff5-51f843fbc2e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 11:16:35.693666 master-0 kubenswrapper[33013]: I0313 11:16:35.693092 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7f5k"
Mar 13 11:16:35.724319 master-0 kubenswrapper[33013]: I0313 11:16:35.724253 33013 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:35.724319 master-0 kubenswrapper[33013]: I0313 11:16:35.724312 33013 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:35.724319 master-0 kubenswrapper[33013]: I0313 11:16:35.724328 33013 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-config\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:35.724702 master-0 kubenswrapper[33013]: I0313 11:16:35.724340 33013 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16b462d2-f716-400e-9ff5-51f843fbc2e9-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:35.825617 master-0 kubenswrapper[33013]: I0313 11:16:35.825541 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl9qf\" (UniqueName: \"kubernetes.io/projected/f36fdace-b3b6-4c56-a39b-db6246d57dda-kube-api-access-rl9qf\") pod \"f36fdace-b3b6-4c56-a39b-db6246d57dda\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") "
Mar 13 11:16:35.826282 master-0 kubenswrapper[33013]: I0313 11:16:35.825752 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-scripts\") pod \"f36fdace-b3b6-4c56-a39b-db6246d57dda\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") "
Mar 13 11:16:35.826282 master-0 kubenswrapper[33013]: I0313 11:16:35.825839 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-combined-ca-bundle\") pod \"f36fdace-b3b6-4c56-a39b-db6246d57dda\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") "
Mar 13 11:16:35.826282 master-0 kubenswrapper[33013]: I0313 11:16:35.826010 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-config-data\") pod \"f36fdace-b3b6-4c56-a39b-db6246d57dda\" (UID: \"f36fdace-b3b6-4c56-a39b-db6246d57dda\") "
Mar 13 11:16:35.831616 master-0 kubenswrapper[33013]: I0313 11:16:35.829845 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36fdace-b3b6-4c56-a39b-db6246d57dda-kube-api-access-rl9qf" (OuterVolumeSpecName: "kube-api-access-rl9qf") pod "f36fdace-b3b6-4c56-a39b-db6246d57dda" (UID: "f36fdace-b3b6-4c56-a39b-db6246d57dda"). InnerVolumeSpecName "kube-api-access-rl9qf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:16:35.831616 master-0 kubenswrapper[33013]: I0313 11:16:35.830797 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-scripts" (OuterVolumeSpecName: "scripts") pod "f36fdace-b3b6-4c56-a39b-db6246d57dda" (UID: "f36fdace-b3b6-4c56-a39b-db6246d57dda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:35.867541 master-0 kubenswrapper[33013]: I0313 11:16:35.867431 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-config-data" (OuterVolumeSpecName: "config-data") pod "f36fdace-b3b6-4c56-a39b-db6246d57dda" (UID: "f36fdace-b3b6-4c56-a39b-db6246d57dda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:35.870288 master-0 kubenswrapper[33013]: I0313 11:16:35.870219 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f36fdace-b3b6-4c56-a39b-db6246d57dda" (UID: "f36fdace-b3b6-4c56-a39b-db6246d57dda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:35.929574 master-0 kubenswrapper[33013]: I0313 11:16:35.929529 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl9qf\" (UniqueName: \"kubernetes.io/projected/f36fdace-b3b6-4c56-a39b-db6246d57dda-kube-api-access-rl9qf\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:35.929847 master-0 kubenswrapper[33013]: I0313 11:16:35.929835 33013 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-scripts\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:35.929916 master-0 kubenswrapper[33013]: I0313 11:16:35.929906 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:35.929984 master-0 kubenswrapper[33013]: I0313 11:16:35.929974 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36fdace-b3b6-4c56-a39b-db6246d57dda-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:36.269024 master-0 kubenswrapper[33013]: I0313 11:16:36.267990 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm" event={"ID":"16b462d2-f716-400e-9ff5-51f843fbc2e9","Type":"ContainerDied","Data":"8f9c78e39adffaba31b028bb9f108fbcc7a3bd7f72e92ea4a274b7b81638157d"}
Mar 13 11:16:36.269024 master-0 kubenswrapper[33013]: I0313 11:16:36.268061 33013 scope.go:117] "RemoveContainer" containerID="599f42b5b1bc146c48ece4cf639cfa898ee1803a53d5dfa04f6f3511f0df2dd1"
Mar 13 11:16:36.269024 master-0 kubenswrapper[33013]: I0313 11:16:36.268216 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9c9ccb7c-qhldm"
Mar 13 11:16:36.273156 master-0 kubenswrapper[33013]: I0313 11:16:36.273026 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7f5k" event={"ID":"f36fdace-b3b6-4c56-a39b-db6246d57dda","Type":"ContainerDied","Data":"c4377e9bb59d52773d9fcc708412ed86efc095e30da160e7be456da8fdddb6e6"}
Mar 13 11:16:36.273245 master-0 kubenswrapper[33013]: I0313 11:16:36.273175 33013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4377e9bb59d52773d9fcc708412ed86efc095e30da160e7be456da8fdddb6e6"
Mar 13 11:16:36.273294 master-0 kubenswrapper[33013]: I0313 11:16:36.273270 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7f5k"
Mar 13 11:16:36.301256 master-0 kubenswrapper[33013]: I0313 11:16:36.298516 33013 scope.go:117] "RemoveContainer" containerID="cb92b31e4390fda0c0bd7b89bbc9e98d4558692cc96f087be1ab3043594730d1"
Mar 13 11:16:36.319912 master-0 kubenswrapper[33013]: I0313 11:16:36.319861 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-qhldm"]
Mar 13 11:16:36.332486 master-0 kubenswrapper[33013]: I0313 11:16:36.332406 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9c9ccb7c-qhldm"]
Mar 13 11:16:36.426443 master-0 kubenswrapper[33013]: I0313 11:16:36.426372 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 13 11:16:36.426742 master-0 kubenswrapper[33013]: I0313 11:16:36.426702 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerName="nova-api-log" containerID="cri-o://4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f" gracePeriod=30
Mar 13 11:16:36.426877 master-0 kubenswrapper[33013]: I0313 11:16:36.426854 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerName="nova-api-api" containerID="cri-o://1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c" gracePeriod=30
Mar 13 11:16:36.450788 master-0 kubenswrapper[33013]: I0313 11:16:36.449677 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 13 11:16:36.450788 master-0 kubenswrapper[33013]: I0313 11:16:36.449961 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ecbe5862-3d02-4485-9892-a059eaa14438" containerName="nova-scheduler-scheduler" containerID="cri-o://a39a75325780317f3062acb728072ded2c61eb3abf704082ec976927b549442b" gracePeriod=30
Mar 13 11:16:36.508929 master-0 kubenswrapper[33013]: I0313 11:16:36.508866 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 13 11:16:36.509195 master-0 kubenswrapper[33013]: I0313 11:16:36.509154 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-log" containerID="cri-o://046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d" gracePeriod=30
Mar 13 11:16:36.510752 master-0 kubenswrapper[33013]: I0313 11:16:36.509328 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-metadata" containerID="cri-o://1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365" gracePeriod=30
Mar 13 11:16:36.737670 master-0 kubenswrapper[33013]: I0313 11:16:36.733688 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16b462d2-f716-400e-9ff5-51f843fbc2e9" path="/var/lib/kubelet/pods/16b462d2-f716-400e-9ff5-51f843fbc2e9/volumes"
Mar 13 11:16:36.979613 master-0 kubenswrapper[33013]: I0313 11:16:36.979370 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 11:16:37.088078 master-0 kubenswrapper[33013]: I0313 11:16:37.088018 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-combined-ca-bundle\") pod \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") "
Mar 13 11:16:37.088311 master-0 kubenswrapper[33013]: I0313 11:16:37.088140 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbb109df-3f48-4e58-bd8f-70753cdbd68d-logs\") pod \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") "
Mar 13 11:16:37.088311 master-0 kubenswrapper[33013]: I0313 11:16:37.088235 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-public-tls-certs\") pod \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") "
Mar 13 11:16:37.088311 master-0 kubenswrapper[33013]: I0313 11:16:37.088279 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-config-data\") pod \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") "
Mar 13 11:16:37.088410 master-0 kubenswrapper[33013]: I0313 11:16:37.088349 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-internal-tls-certs\") pod \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") "
Mar 13 11:16:37.088410 master-0 kubenswrapper[33013]: I0313 11:16:37.088394 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g5b9\" (UniqueName: \"kubernetes.io/projected/cbb109df-3f48-4e58-bd8f-70753cdbd68d-kube-api-access-8g5b9\") pod \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\" (UID: \"cbb109df-3f48-4e58-bd8f-70753cdbd68d\") "
Mar 13 11:16:37.088512 master-0 kubenswrapper[33013]: I0313 11:16:37.088467 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb109df-3f48-4e58-bd8f-70753cdbd68d-logs" (OuterVolumeSpecName: "logs") pod "cbb109df-3f48-4e58-bd8f-70753cdbd68d" (UID: "cbb109df-3f48-4e58-bd8f-70753cdbd68d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 11:16:37.089030 master-0 kubenswrapper[33013]: I0313 11:16:37.089007 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbb109df-3f48-4e58-bd8f-70753cdbd68d-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:37.091982 master-0 kubenswrapper[33013]: I0313 11:16:37.091909 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb109df-3f48-4e58-bd8f-70753cdbd68d-kube-api-access-8g5b9" (OuterVolumeSpecName: "kube-api-access-8g5b9") pod "cbb109df-3f48-4e58-bd8f-70753cdbd68d" (UID: "cbb109df-3f48-4e58-bd8f-70753cdbd68d"). InnerVolumeSpecName "kube-api-access-8g5b9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 11:16:37.122086 master-0 kubenswrapper[33013]: I0313 11:16:37.121989 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbb109df-3f48-4e58-bd8f-70753cdbd68d" (UID: "cbb109df-3f48-4e58-bd8f-70753cdbd68d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:37.134499 master-0 kubenswrapper[33013]: I0313 11:16:37.134433 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-config-data" (OuterVolumeSpecName: "config-data") pod "cbb109df-3f48-4e58-bd8f-70753cdbd68d" (UID: "cbb109df-3f48-4e58-bd8f-70753cdbd68d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:37.147518 master-0 kubenswrapper[33013]: I0313 11:16:37.147457 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cbb109df-3f48-4e58-bd8f-70753cdbd68d" (UID: "cbb109df-3f48-4e58-bd8f-70753cdbd68d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:37.147924 master-0 kubenswrapper[33013]: I0313 11:16:37.147877 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cbb109df-3f48-4e58-bd8f-70753cdbd68d" (UID: "cbb109df-3f48-4e58-bd8f-70753cdbd68d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 11:16:37.191562 master-0 kubenswrapper[33013]: I0313 11:16:37.191384 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g5b9\" (UniqueName: \"kubernetes.io/projected/cbb109df-3f48-4e58-bd8f-70753cdbd68d-kube-api-access-8g5b9\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:37.191562 master-0 kubenswrapper[33013]: I0313 11:16:37.191429 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:37.191562 master-0 kubenswrapper[33013]: I0313 11:16:37.191443 33013 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:37.191562 master-0 kubenswrapper[33013]: I0313 11:16:37.191453 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-config-data\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:37.191562 master-0 kubenswrapper[33013]: I0313 11:16:37.191465 33013 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbb109df-3f48-4e58-bd8f-70753cdbd68d-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 11:16:37.289216 master-0 kubenswrapper[33013]: I0313 11:16:37.289157 33013 generic.go:334] "Generic (PLEG): container finished" podID="8757d7ed-ae03-4156-b659-9f3099567556" containerID="046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d" exitCode=143
Mar 13 11:16:37.289216 master-0 kubenswrapper[33013]: I0313 11:16:37.289222 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8757d7ed-ae03-4156-b659-9f3099567556","Type":"ContainerDied","Data":"046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d"}
Mar 13 11:16:37.290934 master-0 kubenswrapper[33013]: I0313 11:16:37.290897 33013 generic.go:334] "Generic (PLEG): container finished" podID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerID="1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c" exitCode=0
Mar 13 11:16:37.290934 master-0 kubenswrapper[33013]: I0313 11:16:37.290920 33013 generic.go:334] "Generic (PLEG): container finished" podID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerID="4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f" exitCode=143
Mar 13 11:16:37.290934 master-0 kubenswrapper[33013]: I0313 11:16:37.290936 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbb109df-3f48-4e58-bd8f-70753cdbd68d","Type":"ContainerDied","Data":"1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c"}
Mar 13 11:16:37.291070 master-0 kubenswrapper[33013]: I0313 11:16:37.290953 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbb109df-3f48-4e58-bd8f-70753cdbd68d","Type":"ContainerDied","Data":"4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f"}
Mar 13 11:16:37.291070 master-0 kubenswrapper[33013]: I0313 11:16:37.290963 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbb109df-3f48-4e58-bd8f-70753cdbd68d","Type":"ContainerDied","Data":"92a38206875768d6071c51a5a9471462090bc4d47f85894257627dfba4030e41"}
Mar 13 11:16:37.291070 master-0 kubenswrapper[33013]: I0313 11:16:37.290980 33013 scope.go:117] "RemoveContainer" containerID="1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c"
Mar 13 11:16:37.291156 master-0 kubenswrapper[33013]: I0313 11:16:37.291085 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 11:16:37.328187 master-0 kubenswrapper[33013]: I0313 11:16:37.328139 33013 scope.go:117] "RemoveContainer" containerID="4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f"
Mar 13 11:16:37.343352 master-0 kubenswrapper[33013]: I0313 11:16:37.342663 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 13 11:16:37.349967 master-0 kubenswrapper[33013]: I0313 11:16:37.349293 33013 scope.go:117] "RemoveContainer" containerID="1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c"
Mar 13 11:16:37.349967 master-0 kubenswrapper[33013]: E0313 11:16:37.349817 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c\": container with ID starting with 1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c not found: ID does not exist" containerID="1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c"
Mar 13 11:16:37.349967 master-0 kubenswrapper[33013]: I0313 11:16:37.349843 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c"} err="failed to get container status \"1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c\": rpc error: code = NotFound desc = could not find container \"1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c\": container with ID starting with 1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c not found: ID does not exist"
Mar 13 11:16:37.349967 master-0 kubenswrapper[33013]: I0313 11:16:37.349865 33013 scope.go:117] "RemoveContainer" containerID="4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f"
Mar 13 11:16:37.350229 master-0 kubenswrapper[33013]: E0313 11:16:37.350165 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f\": container with ID starting with 4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f not found: ID does not exist" containerID="4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f"
Mar 13 11:16:37.350229 master-0 kubenswrapper[33013]: I0313 11:16:37.350183 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f"} err="failed to get container status \"4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f\": rpc error: code = NotFound desc = could not find container \"4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f\": container with ID starting with 4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f not found: ID does not exist"
Mar 13 11:16:37.350229 master-0 kubenswrapper[33013]: I0313 11:16:37.350195 33013 scope.go:117] "RemoveContainer" containerID="1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c"
Mar 13 11:16:37.350483 master-0 kubenswrapper[33013]: I0313 11:16:37.350457 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c"} err="failed to get container status \"1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c\": rpc error: code = NotFound desc = could not find container \"1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c\": container with ID starting with 1326b35814c73d4c986d7d5cb4ea452a0d7c098cda60b5c78dd0c5aca3d0711c not found: ID does not exist"
Mar 13 11:16:37.350483 master-0 kubenswrapper[33013]: I0313 11:16:37.350478 33013 scope.go:117] "RemoveContainer" containerID="4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f"
Mar 13 11:16:37.350708 master-0 kubenswrapper[33013]: I0313 11:16:37.350683 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f"} err="failed to get container status \"4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f\": rpc error: code = NotFound desc = could not find container \"4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f\": container with ID starting with 4f6a1273edddca23be12a8f2c0d0ea9d41bacbd6ebac3707e02d5140d163541f not found: ID does not exist"
Mar 13 11:16:37.355908 master-0 kubenswrapper[33013]: I0313 11:16:37.355826 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 13 11:16:37.383451 master-0 kubenswrapper[33013]: I0313 11:16:37.383363 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 13 11:16:37.384579 master-0 kubenswrapper[33013]: E0313 11:16:37.384537 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b462d2-f716-400e-9ff5-51f843fbc2e9" containerName="init"
Mar 13 11:16:37.384692 master-0 kubenswrapper[33013]: I0313 11:16:37.384658 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b462d2-f716-400e-9ff5-51f843fbc2e9" containerName="init"
Mar 13 11:16:37.384692 master-0 kubenswrapper[33013]: E0313 11:16:37.384684 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerName="nova-api-log"
Mar 13 11:16:37.384692 master-0 kubenswrapper[33013]: I0313 11:16:37.384692 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerName="nova-api-log"
Mar 13 11:16:37.384925 master-0 kubenswrapper[33013]: E0313 11:16:37.384732 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b462d2-f716-400e-9ff5-51f843fbc2e9" containerName="dnsmasq-dns"
Mar 13 11:16:37.384925 master-0 kubenswrapper[33013]: I0313 11:16:37.384910 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b462d2-f716-400e-9ff5-51f843fbc2e9" containerName="dnsmasq-dns"
Mar 13 11:16:37.385018 master-0 kubenswrapper[33013]: E0313 11:16:37.384931 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerName="nova-api-api"
Mar 13 11:16:37.385018 master-0 kubenswrapper[33013]: I0313 11:16:37.384938 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerName="nova-api-api"
Mar 13 11:16:37.385018 master-0 kubenswrapper[33013]: E0313 11:16:37.384965 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36fdace-b3b6-4c56-a39b-db6246d57dda" containerName="nova-manage"
Mar 13 11:16:37.385018 master-0 kubenswrapper[33013]: I0313 11:16:37.384973 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36fdace-b3b6-4c56-a39b-db6246d57dda" containerName="nova-manage"
Mar 13 11:16:37.385018 master-0 kubenswrapper[33013]: E0313 11:16:37.384985 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2d04fd-e054-421f-862a-d01159c5c3a2" containerName="nova-manage"
Mar 13 11:16:37.385018 master-0 kubenswrapper[33013]: I0313 11:16:37.384991 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2d04fd-e054-421f-862a-d01159c5c3a2" containerName="nova-manage"
Mar 13 11:16:37.385573 master-0 kubenswrapper[33013]: I0313 11:16:37.385541 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerName="nova-api-log"
Mar 13 11:16:37.385573 master-0 kubenswrapper[33013]: I0313 11:16:37.385566 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2d04fd-e054-421f-862a-d01159c5c3a2" containerName="nova-manage"
Mar 13 11:16:37.385707 master-0 kubenswrapper[33013]: I0313 11:16:37.385614 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b462d2-f716-400e-9ff5-51f843fbc2e9" containerName="dnsmasq-dns"
Mar 13 11:16:37.385707 master-0 kubenswrapper[33013]: I0313 11:16:37.385637 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36fdace-b3b6-4c56-a39b-db6246d57dda" containerName="nova-manage"
Mar 13 11:16:37.385707 master-0 kubenswrapper[33013]: I0313 11:16:37.385649 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" containerName="nova-api-api"
Mar 13 11:16:37.387958 master-0 kubenswrapper[33013]: I0313 11:16:37.387383 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 11:16:37.391202 master-0 kubenswrapper[33013]: I0313 11:16:37.389924 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Mar 13 11:16:37.391202 master-0 kubenswrapper[33013]: I0313 11:16:37.390395 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Mar 13 11:16:37.392875 master-0 kubenswrapper[33013]: I0313 11:16:37.391426 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 13 11:16:37.456364 master-0 kubenswrapper[33013]: I0313 11:16:37.456306 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 13 11:16:37.501599 master-0 kubenswrapper[33013]: I0313 11:16:37.501521 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.501856 master-0 kubenswrapper[33013]: I0313 11:16:37.501632 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.501856 master-0 kubenswrapper[33013]: I0313 11:16:37.501820 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-config-data\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.501946 master-0 kubenswrapper[33013]: I0313 11:16:37.501887 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbr6x\" (UniqueName: \"kubernetes.io/projected/4b6fda02-e35b-497d-8eaa-299ab2633667-kube-api-access-mbr6x\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.502194 master-0 kubenswrapper[33013]: I0313 11:16:37.502159 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6fda02-e35b-497d-8eaa-299ab2633667-logs\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.502572 master-0 kubenswrapper[33013]: I0313 11:16:37.502390 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.605448 master-0 kubenswrapper[33013]: I0313 11:16:37.605370 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.605726 master-0 kubenswrapper[33013]: I0313 11:16:37.605569 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.605726 master-0 kubenswrapper[33013]: I0313 11:16:37.605696 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.606281 master-0 kubenswrapper[33013]: I0313 11:16:37.606254 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-config-data\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.606352 master-0 kubenswrapper[33013]: I0313 11:16:37.606300 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbr6x\" (UniqueName: \"kubernetes.io/projected/4b6fda02-e35b-497d-8eaa-299ab2633667-kube-api-access-mbr6x\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.606617 master-0 kubenswrapper[33013]: I0313 11:16:37.606534 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6fda02-e35b-497d-8eaa-299ab2633667-logs\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.607171 master-0 kubenswrapper[33013]: I0313 11:16:37.607144 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6fda02-e35b-497d-8eaa-299ab2633667-logs\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.609677 master-0 kubenswrapper[33013]: I0313 11:16:37.609644 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.610149 master-0 kubenswrapper[33013]: I0313 11:16:37.610117 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.613737 master-0 kubenswrapper[33013]: I0313 11:16:37.613266 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.615371 master-0 kubenswrapper[33013]: I0313 11:16:37.614168 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6fda02-e35b-497d-8eaa-299ab2633667-config-data\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.631726 master-0 kubenswrapper[33013]: I0313 11:16:37.630800 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbr6x\" (UniqueName: \"kubernetes.io/projected/4b6fda02-e35b-497d-8eaa-299ab2633667-kube-api-access-mbr6x\") pod \"nova-api-0\" (UID: \"4b6fda02-e35b-497d-8eaa-299ab2633667\") " pod="openstack/nova-api-0"
Mar 13 11:16:37.759644 master-0 kubenswrapper[33013]: I0313 11:16:37.759459 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 13 11:16:38.098148 master-0 kubenswrapper[33013]: E0313 11:16:38.098084 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a39a75325780317f3062acb728072ded2c61eb3abf704082ec976927b549442b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 11:16:38.099842 master-0 kubenswrapper[33013]: E0313 11:16:38.099793 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a39a75325780317f3062acb728072ded2c61eb3abf704082ec976927b549442b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 11:16:38.102181 master-0 kubenswrapper[33013]: E0313 11:16:38.102104 33013 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a39a75325780317f3062acb728072ded2c61eb3abf704082ec976927b549442b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 13 11:16:38.102252 master-0 kubenswrapper[33013]: E0313 11:16:38.102227 33013 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ecbe5862-3d02-4485-9892-a059eaa14438" containerName="nova-scheduler-scheduler"
Mar 13 11:16:38.239544 master-0 kubenswrapper[33013]: I0313 11:16:38.234434 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openstack/nova-api-0"] Mar 13 11:16:38.304102 master-0 kubenswrapper[33013]: I0313 11:16:38.304048 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b6fda02-e35b-497d-8eaa-299ab2633667","Type":"ContainerStarted","Data":"76ff3fdfe5ea07ec09d26564dbda3b4a3a36f244d8b6fe772127fe0b022a0dc8"} Mar 13 11:16:38.736839 master-0 kubenswrapper[33013]: I0313 11:16:38.736757 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb109df-3f48-4e58-bd8f-70753cdbd68d" path="/var/lib/kubelet/pods/cbb109df-3f48-4e58-bd8f-70753cdbd68d/volumes" Mar 13 11:16:39.321698 master-0 kubenswrapper[33013]: I0313 11:16:39.321640 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b6fda02-e35b-497d-8eaa-299ab2633667","Type":"ContainerStarted","Data":"466490aac894ebf93b07d3ee61c1d97eba367c536ae65d0e89239b4d3ba7be0e"} Mar 13 11:16:39.321698 master-0 kubenswrapper[33013]: I0313 11:16:39.321698 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b6fda02-e35b-497d-8eaa-299ab2633667","Type":"ContainerStarted","Data":"cb30697722611d943ff2e88d39641ceacdec8b08566b35fabc48bb295f4b5f2a"} Mar 13 11:16:39.352252 master-0 kubenswrapper[33013]: I0313 11:16:39.352173 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.352153408 podStartE2EDuration="2.352153408s" podCreationTimestamp="2026-03-13 11:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:39.348676939 +0000 UTC m=+1182.824630288" watchObservedRunningTime="2026-03-13 11:16:39.352153408 +0000 UTC m=+1182.828106747" Mar 13 11:16:40.152931 master-0 kubenswrapper[33013]: I0313 11:16:40.152812 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:16:40.314471 master-0 kubenswrapper[33013]: I0313 11:16:40.314303 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-combined-ca-bundle\") pod \"8757d7ed-ae03-4156-b659-9f3099567556\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " Mar 13 11:16:40.314740 master-0 kubenswrapper[33013]: I0313 11:16:40.314528 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-config-data\") pod \"8757d7ed-ae03-4156-b659-9f3099567556\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " Mar 13 11:16:40.314740 master-0 kubenswrapper[33013]: I0313 11:16:40.314579 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-nova-metadata-tls-certs\") pod \"8757d7ed-ae03-4156-b659-9f3099567556\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " Mar 13 11:16:40.314740 master-0 kubenswrapper[33013]: I0313 11:16:40.314633 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8757d7ed-ae03-4156-b659-9f3099567556-logs\") pod \"8757d7ed-ae03-4156-b659-9f3099567556\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " Mar 13 11:16:40.314740 master-0 kubenswrapper[33013]: I0313 11:16:40.314728 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8tdc\" (UniqueName: \"kubernetes.io/projected/8757d7ed-ae03-4156-b659-9f3099567556-kube-api-access-w8tdc\") pod \"8757d7ed-ae03-4156-b659-9f3099567556\" (UID: \"8757d7ed-ae03-4156-b659-9f3099567556\") " Mar 13 11:16:40.316237 master-0 kubenswrapper[33013]: I0313 11:16:40.316180 33013 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8757d7ed-ae03-4156-b659-9f3099567556-logs" (OuterVolumeSpecName: "logs") pod "8757d7ed-ae03-4156-b659-9f3099567556" (UID: "8757d7ed-ae03-4156-b659-9f3099567556"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 11:16:40.320796 master-0 kubenswrapper[33013]: I0313 11:16:40.320739 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8757d7ed-ae03-4156-b659-9f3099567556-kube-api-access-w8tdc" (OuterVolumeSpecName: "kube-api-access-w8tdc") pod "8757d7ed-ae03-4156-b659-9f3099567556" (UID: "8757d7ed-ae03-4156-b659-9f3099567556"). InnerVolumeSpecName "kube-api-access-w8tdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:40.344574 master-0 kubenswrapper[33013]: I0313 11:16:40.344506 33013 generic.go:334] "Generic (PLEG): container finished" podID="8757d7ed-ae03-4156-b659-9f3099567556" containerID="1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365" exitCode=0 Mar 13 11:16:40.344574 master-0 kubenswrapper[33013]: I0313 11:16:40.344556 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8757d7ed-ae03-4156-b659-9f3099567556","Type":"ContainerDied","Data":"1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365"} Mar 13 11:16:40.345198 master-0 kubenswrapper[33013]: I0313 11:16:40.344615 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8757d7ed-ae03-4156-b659-9f3099567556","Type":"ContainerDied","Data":"a9345ebc5f0f5fadec11c1f7fd7ca3ee527403498c26e305a874a5f19ebf4ffe"} Mar 13 11:16:40.345198 master-0 kubenswrapper[33013]: I0313 11:16:40.344540 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:16:40.345198 master-0 kubenswrapper[33013]: I0313 11:16:40.344634 33013 scope.go:117] "RemoveContainer" containerID="1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365" Mar 13 11:16:40.362972 master-0 kubenswrapper[33013]: I0313 11:16:40.362909 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-config-data" (OuterVolumeSpecName: "config-data") pod "8757d7ed-ae03-4156-b659-9f3099567556" (UID: "8757d7ed-ae03-4156-b659-9f3099567556"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:40.366746 master-0 kubenswrapper[33013]: I0313 11:16:40.366685 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8757d7ed-ae03-4156-b659-9f3099567556" (UID: "8757d7ed-ae03-4156-b659-9f3099567556"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:40.382476 master-0 kubenswrapper[33013]: I0313 11:16:40.382394 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8757d7ed-ae03-4156-b659-9f3099567556" (UID: "8757d7ed-ae03-4156-b659-9f3099567556"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:40.419001 master-0 kubenswrapper[33013]: I0313 11:16:40.418947 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:40.419001 master-0 kubenswrapper[33013]: I0313 11:16:40.418994 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:40.419402 master-0 kubenswrapper[33013]: I0313 11:16:40.419006 33013 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8757d7ed-ae03-4156-b659-9f3099567556-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:40.419402 master-0 kubenswrapper[33013]: I0313 11:16:40.419019 33013 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8757d7ed-ae03-4156-b659-9f3099567556-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:40.419402 master-0 kubenswrapper[33013]: I0313 11:16:40.419028 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8tdc\" (UniqueName: \"kubernetes.io/projected/8757d7ed-ae03-4156-b659-9f3099567556-kube-api-access-w8tdc\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:40.452145 master-0 kubenswrapper[33013]: I0313 11:16:40.452103 33013 scope.go:117] "RemoveContainer" containerID="046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d" Mar 13 11:16:40.481402 master-0 kubenswrapper[33013]: I0313 11:16:40.481354 33013 scope.go:117] "RemoveContainer" containerID="1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365" Mar 13 11:16:40.481920 master-0 kubenswrapper[33013]: E0313 11:16:40.481868 33013 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365\": container with ID starting with 1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365 not found: ID does not exist" containerID="1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365" Mar 13 11:16:40.481962 master-0 kubenswrapper[33013]: I0313 11:16:40.481926 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365"} err="failed to get container status \"1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365\": rpc error: code = NotFound desc = could not find container \"1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365\": container with ID starting with 1face460b9c467022c0c2bd5c7183f38d69d450173ded840d340f0a729d90365 not found: ID does not exist" Mar 13 11:16:40.481962 master-0 kubenswrapper[33013]: I0313 11:16:40.481954 33013 scope.go:117] "RemoveContainer" containerID="046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d" Mar 13 11:16:40.482316 master-0 kubenswrapper[33013]: E0313 11:16:40.482272 33013 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d\": container with ID starting with 046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d not found: ID does not exist" containerID="046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d" Mar 13 11:16:40.482379 master-0 kubenswrapper[33013]: I0313 11:16:40.482309 33013 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d"} err="failed to get container status \"046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d\": rpc 
error: code = NotFound desc = could not find container \"046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d\": container with ID starting with 046d22b08bc96d67ea1f8b31db8e80f9b6f32425106f17b7a0076be50f0f704d not found: ID does not exist" Mar 13 11:16:40.748894 master-0 kubenswrapper[33013]: I0313 11:16:40.748838 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:16:40.781633 master-0 kubenswrapper[33013]: I0313 11:16:40.774666 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:16:40.786422 master-0 kubenswrapper[33013]: I0313 11:16:40.786328 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:16:40.790624 master-0 kubenswrapper[33013]: E0313 11:16:40.786920 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-log" Mar 13 11:16:40.790624 master-0 kubenswrapper[33013]: I0313 11:16:40.786941 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-log" Mar 13 11:16:40.790624 master-0 kubenswrapper[33013]: E0313 11:16:40.786985 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-metadata" Mar 13 11:16:40.790624 master-0 kubenswrapper[33013]: I0313 11:16:40.786992 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-metadata" Mar 13 11:16:40.790624 master-0 kubenswrapper[33013]: I0313 11:16:40.787288 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-log" Mar 13 11:16:40.790624 master-0 kubenswrapper[33013]: I0313 11:16:40.787343 33013 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8757d7ed-ae03-4156-b659-9f3099567556" containerName="nova-metadata-metadata" Mar 13 11:16:40.790624 master-0 kubenswrapper[33013]: I0313 11:16:40.788668 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:16:40.793107 master-0 kubenswrapper[33013]: I0313 11:16:40.791823 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 13 11:16:40.810004 master-0 kubenswrapper[33013]: I0313 11:16:40.795928 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 13 11:16:40.822023 master-0 kubenswrapper[33013]: I0313 11:16:40.821960 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:16:40.932969 master-0 kubenswrapper[33013]: I0313 11:16:40.932791 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579ae67a-8961-4a49-b6f9-c85a20f56222-logs\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:40.933203 master-0 kubenswrapper[33013]: I0313 11:16:40.932971 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579ae67a-8961-4a49-b6f9-c85a20f56222-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:40.933636 master-0 kubenswrapper[33013]: I0313 11:16:40.933563 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86v27\" (UniqueName: \"kubernetes.io/projected/579ae67a-8961-4a49-b6f9-c85a20f56222-kube-api-access-86v27\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 
13 11:16:40.933806 master-0 kubenswrapper[33013]: I0313 11:16:40.933775 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/579ae67a-8961-4a49-b6f9-c85a20f56222-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:40.933866 master-0 kubenswrapper[33013]: I0313 11:16:40.933814 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579ae67a-8961-4a49-b6f9-c85a20f56222-config-data\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.036485 master-0 kubenswrapper[33013]: I0313 11:16:41.036444 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86v27\" (UniqueName: \"kubernetes.io/projected/579ae67a-8961-4a49-b6f9-c85a20f56222-kube-api-access-86v27\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.036841 master-0 kubenswrapper[33013]: I0313 11:16:41.036821 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/579ae67a-8961-4a49-b6f9-c85a20f56222-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.036933 master-0 kubenswrapper[33013]: I0313 11:16:41.036919 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579ae67a-8961-4a49-b6f9-c85a20f56222-config-data\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.037024 master-0 kubenswrapper[33013]: 
I0313 11:16:41.037011 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579ae67a-8961-4a49-b6f9-c85a20f56222-logs\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.037128 master-0 kubenswrapper[33013]: I0313 11:16:41.037115 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579ae67a-8961-4a49-b6f9-c85a20f56222-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.037607 master-0 kubenswrapper[33013]: I0313 11:16:41.037523 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579ae67a-8961-4a49-b6f9-c85a20f56222-logs\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.040528 master-0 kubenswrapper[33013]: I0313 11:16:41.040509 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/579ae67a-8961-4a49-b6f9-c85a20f56222-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.041835 master-0 kubenswrapper[33013]: I0313 11:16:41.041793 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579ae67a-8961-4a49-b6f9-c85a20f56222-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.041923 master-0 kubenswrapper[33013]: I0313 11:16:41.041875 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/579ae67a-8961-4a49-b6f9-c85a20f56222-config-data\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.055051 master-0 kubenswrapper[33013]: I0313 11:16:41.054989 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86v27\" (UniqueName: \"kubernetes.io/projected/579ae67a-8961-4a49-b6f9-c85a20f56222-kube-api-access-86v27\") pod \"nova-metadata-0\" (UID: \"579ae67a-8961-4a49-b6f9-c85a20f56222\") " pod="openstack/nova-metadata-0" Mar 13 11:16:41.128041 master-0 kubenswrapper[33013]: I0313 11:16:41.127982 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 13 11:16:41.638185 master-0 kubenswrapper[33013]: I0313 11:16:41.637040 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 13 11:16:42.382486 master-0 kubenswrapper[33013]: I0313 11:16:42.382420 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"579ae67a-8961-4a49-b6f9-c85a20f56222","Type":"ContainerStarted","Data":"8d82085b8276a91de21a6c4710a36b09ca3419114227e08b6a73c668a08420b9"} Mar 13 11:16:42.382486 master-0 kubenswrapper[33013]: I0313 11:16:42.382476 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"579ae67a-8961-4a49-b6f9-c85a20f56222","Type":"ContainerStarted","Data":"098a4842319a85c84cea2e0fc39cf1682bf8c9171d5f24d8bc31b977ae72b9e4"} Mar 13 11:16:42.382486 master-0 kubenswrapper[33013]: I0313 11:16:42.382491 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"579ae67a-8961-4a49-b6f9-c85a20f56222","Type":"ContainerStarted","Data":"34c0f4cbff1e7f1227c0e93718434ff00dc51f3406dcd7276d5f62f6b75351f7"} Mar 13 11:16:42.385692 master-0 kubenswrapper[33013]: I0313 11:16:42.385634 33013 generic.go:334] "Generic (PLEG): container finished" 
podID="ecbe5862-3d02-4485-9892-a059eaa14438" containerID="a39a75325780317f3062acb728072ded2c61eb3abf704082ec976927b549442b" exitCode=0 Mar 13 11:16:42.385809 master-0 kubenswrapper[33013]: I0313 11:16:42.385675 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecbe5862-3d02-4485-9892-a059eaa14438","Type":"ContainerDied","Data":"a39a75325780317f3062acb728072ded2c61eb3abf704082ec976927b549442b"} Mar 13 11:16:42.423785 master-0 kubenswrapper[33013]: I0313 11:16:42.423401 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.423370178 podStartE2EDuration="2.423370178s" podCreationTimestamp="2026-03-13 11:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:42.410333115 +0000 UTC m=+1185.886286474" watchObservedRunningTime="2026-03-13 11:16:42.423370178 +0000 UTC m=+1185.899323527" Mar 13 11:16:42.453015 master-0 kubenswrapper[33013]: I0313 11:16:42.452963 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:16:42.579260 master-0 kubenswrapper[33013]: I0313 11:16:42.579190 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-config-data\") pod \"ecbe5862-3d02-4485-9892-a059eaa14438\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " Mar 13 11:16:42.579699 master-0 kubenswrapper[33013]: I0313 11:16:42.579665 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bvjx\" (UniqueName: \"kubernetes.io/projected/ecbe5862-3d02-4485-9892-a059eaa14438-kube-api-access-8bvjx\") pod \"ecbe5862-3d02-4485-9892-a059eaa14438\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " Mar 13 11:16:42.579919 master-0 kubenswrapper[33013]: I0313 11:16:42.579889 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-combined-ca-bundle\") pod \"ecbe5862-3d02-4485-9892-a059eaa14438\" (UID: \"ecbe5862-3d02-4485-9892-a059eaa14438\") " Mar 13 11:16:42.596680 master-0 kubenswrapper[33013]: I0313 11:16:42.596553 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbe5862-3d02-4485-9892-a059eaa14438-kube-api-access-8bvjx" (OuterVolumeSpecName: "kube-api-access-8bvjx") pod "ecbe5862-3d02-4485-9892-a059eaa14438" (UID: "ecbe5862-3d02-4485-9892-a059eaa14438"). InnerVolumeSpecName "kube-api-access-8bvjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:16:42.627424 master-0 kubenswrapper[33013]: I0313 11:16:42.627370 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-config-data" (OuterVolumeSpecName: "config-data") pod "ecbe5862-3d02-4485-9892-a059eaa14438" (UID: "ecbe5862-3d02-4485-9892-a059eaa14438"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:42.638160 master-0 kubenswrapper[33013]: I0313 11:16:42.638109 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecbe5862-3d02-4485-9892-a059eaa14438" (UID: "ecbe5862-3d02-4485-9892-a059eaa14438"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:16:42.685148 master-0 kubenswrapper[33013]: I0313 11:16:42.684990 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bvjx\" (UniqueName: \"kubernetes.io/projected/ecbe5862-3d02-4485-9892-a059eaa14438-kube-api-access-8bvjx\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:42.685148 master-0 kubenswrapper[33013]: I0313 11:16:42.685052 33013 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:42.685148 master-0 kubenswrapper[33013]: I0313 11:16:42.685067 33013 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecbe5862-3d02-4485-9892-a059eaa14438-config-data\") on node \"master-0\" DevicePath \"\"" Mar 13 11:16:42.729804 master-0 kubenswrapper[33013]: I0313 11:16:42.729456 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8757d7ed-ae03-4156-b659-9f3099567556" path="/var/lib/kubelet/pods/8757d7ed-ae03-4156-b659-9f3099567556/volumes" Mar 13 11:16:43.401258 master-0 kubenswrapper[33013]: I0313 11:16:43.401195 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecbe5862-3d02-4485-9892-a059eaa14438","Type":"ContainerDied","Data":"7372f40b4c25026cd813eb21177d53d3c087d1bb32742773c686da8c74fb1839"} Mar 13 11:16:43.401536 master-0 kubenswrapper[33013]: I0313 11:16:43.401273 33013 scope.go:117] "RemoveContainer" containerID="a39a75325780317f3062acb728072ded2c61eb3abf704082ec976927b549442b" Mar 13 11:16:43.401744 master-0 kubenswrapper[33013]: I0313 11:16:43.401706 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:16:43.434400 master-0 kubenswrapper[33013]: I0313 11:16:43.434327 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:43.454882 master-0 kubenswrapper[33013]: I0313 11:16:43.454841 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:43.471136 master-0 kubenswrapper[33013]: I0313 11:16:43.471073 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:43.472027 master-0 kubenswrapper[33013]: E0313 11:16:43.471982 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbe5862-3d02-4485-9892-a059eaa14438" containerName="nova-scheduler-scheduler" Mar 13 11:16:43.472027 master-0 kubenswrapper[33013]: I0313 11:16:43.472020 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbe5862-3d02-4485-9892-a059eaa14438" containerName="nova-scheduler-scheduler" Mar 13 11:16:43.472403 master-0 kubenswrapper[33013]: I0313 11:16:43.472373 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbe5862-3d02-4485-9892-a059eaa14438" containerName="nova-scheduler-scheduler" Mar 13 
11:16:43.473612 master-0 kubenswrapper[33013]: I0313 11:16:43.473570 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:16:43.482108 master-0 kubenswrapper[33013]: I0313 11:16:43.482042 33013 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 13 11:16:43.485311 master-0 kubenswrapper[33013]: I0313 11:16:43.484469 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:43.612273 master-0 kubenswrapper[33013]: I0313 11:16:43.612197 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f8dca0f-51a8-4420-a506-d12fb6c0c7f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:43.612273 master-0 kubenswrapper[33013]: I0313 11:16:43.612268 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnk4f\" (UniqueName: \"kubernetes.io/projected/4f8dca0f-51a8-4420-a506-d12fb6c0c7f4-kube-api-access-qnk4f\") pod \"nova-scheduler-0\" (UID: \"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:43.612651 master-0 kubenswrapper[33013]: I0313 11:16:43.612416 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f8dca0f-51a8-4420-a506-d12fb6c0c7f4-config-data\") pod \"nova-scheduler-0\" (UID: \"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:43.715052 master-0 kubenswrapper[33013]: I0313 11:16:43.714898 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4f8dca0f-51a8-4420-a506-d12fb6c0c7f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:43.715052 master-0 kubenswrapper[33013]: I0313 11:16:43.714950 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnk4f\" (UniqueName: \"kubernetes.io/projected/4f8dca0f-51a8-4420-a506-d12fb6c0c7f4-kube-api-access-qnk4f\") pod \"nova-scheduler-0\" (UID: \"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:43.715052 master-0 kubenswrapper[33013]: I0313 11:16:43.714980 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f8dca0f-51a8-4420-a506-d12fb6c0c7f4-config-data\") pod \"nova-scheduler-0\" (UID: \"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:43.719700 master-0 kubenswrapper[33013]: I0313 11:16:43.719620 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f8dca0f-51a8-4420-a506-d12fb6c0c7f4-config-data\") pod \"nova-scheduler-0\" (UID: \"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:43.720456 master-0 kubenswrapper[33013]: I0313 11:16:43.720429 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f8dca0f-51a8-4420-a506-d12fb6c0c7f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:43.736825 master-0 kubenswrapper[33013]: I0313 11:16:43.736765 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnk4f\" (UniqueName: \"kubernetes.io/projected/4f8dca0f-51a8-4420-a506-d12fb6c0c7f4-kube-api-access-qnk4f\") pod \"nova-scheduler-0\" (UID: 
\"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4\") " pod="openstack/nova-scheduler-0" Mar 13 11:16:43.817964 master-0 kubenswrapper[33013]: I0313 11:16:43.817905 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 13 11:16:44.316407 master-0 kubenswrapper[33013]: I0313 11:16:44.316346 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 13 11:16:44.322863 master-0 kubenswrapper[33013]: W0313 11:16:44.322786 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8dca0f_51a8_4420_a506_d12fb6c0c7f4.slice/crio-884d2ee062e23d73d5f324fa4971822e68e289330917bc71b87a6b3163889eb2 WatchSource:0}: Error finding container 884d2ee062e23d73d5f324fa4971822e68e289330917bc71b87a6b3163889eb2: Status 404 returned error can't find the container with id 884d2ee062e23d73d5f324fa4971822e68e289330917bc71b87a6b3163889eb2 Mar 13 11:16:44.425665 master-0 kubenswrapper[33013]: I0313 11:16:44.425491 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4","Type":"ContainerStarted","Data":"884d2ee062e23d73d5f324fa4971822e68e289330917bc71b87a6b3163889eb2"} Mar 13 11:16:44.741458 master-0 kubenswrapper[33013]: I0313 11:16:44.741290 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecbe5862-3d02-4485-9892-a059eaa14438" path="/var/lib/kubelet/pods/ecbe5862-3d02-4485-9892-a059eaa14438/volumes" Mar 13 11:16:45.451369 master-0 kubenswrapper[33013]: I0313 11:16:45.451285 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f8dca0f-51a8-4420-a506-d12fb6c0c7f4","Type":"ContainerStarted","Data":"58af99a75afa7e349ba0b0c04d2d70aa755e3f6158e0c7451e795539e0f3ba6a"} Mar 13 11:16:45.501623 master-0 kubenswrapper[33013]: I0313 11:16:45.501081 33013 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.501057513 podStartE2EDuration="2.501057513s" podCreationTimestamp="2026-03-13 11:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:16:45.492152728 +0000 UTC m=+1188.968106077" watchObservedRunningTime="2026-03-13 11:16:45.501057513 +0000 UTC m=+1188.977010882" Mar 13 11:16:46.128972 master-0 kubenswrapper[33013]: I0313 11:16:46.128866 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 11:16:46.128972 master-0 kubenswrapper[33013]: I0313 11:16:46.128961 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 13 11:16:47.760206 master-0 kubenswrapper[33013]: I0313 11:16:47.760139 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 11:16:47.760206 master-0 kubenswrapper[33013]: I0313 11:16:47.760199 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 13 11:16:48.772843 master-0 kubenswrapper[33013]: I0313 11:16:48.772761 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b6fda02-e35b-497d-8eaa-299ab2633667" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.18:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:16:48.772843 master-0 kubenswrapper[33013]: I0313 11:16:48.772793 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b6fda02-e35b-497d-8eaa-299ab2633667" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.18:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:16:48.818630 master-0 kubenswrapper[33013]: I0313 11:16:48.818553 
33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 13 11:16:51.129246 master-0 kubenswrapper[33013]: I0313 11:16:51.128706 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 11:16:51.129246 master-0 kubenswrapper[33013]: I0313 11:16:51.129243 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 13 11:16:52.141930 master-0 kubenswrapper[33013]: I0313 11:16:52.141828 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="579ae67a-8961-4a49-b6f9-c85a20f56222" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.19:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:16:52.142746 master-0 kubenswrapper[33013]: I0313 11:16:52.141827 33013 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="579ae67a-8961-4a49-b6f9-c85a20f56222" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.19:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 11:16:53.818117 master-0 kubenswrapper[33013]: I0313 11:16:53.818063 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 13 11:16:53.853178 master-0 kubenswrapper[33013]: I0313 11:16:53.853105 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 13 11:16:54.595811 master-0 kubenswrapper[33013]: I0313 11:16:54.595747 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 13 11:16:57.768784 master-0 kubenswrapper[33013]: I0313 11:16:57.768709 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 
11:16:57.769533 master-0 kubenswrapper[33013]: I0313 11:16:57.769295 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 11:16:57.774838 master-0 kubenswrapper[33013]: I0313 11:16:57.774789 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 13 11:16:57.783131 master-0 kubenswrapper[33013]: I0313 11:16:57.783060 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 11:16:58.606809 master-0 kubenswrapper[33013]: I0313 11:16:58.606764 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 13 11:16:58.613445 master-0 kubenswrapper[33013]: I0313 11:16:58.613378 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 13 11:17:01.134928 master-0 kubenswrapper[33013]: I0313 11:17:01.134857 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 11:17:01.135682 master-0 kubenswrapper[33013]: I0313 11:17:01.135643 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 13 11:17:01.142546 master-0 kubenswrapper[33013]: I0313 11:17:01.142495 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 11:17:01.650114 master-0 kubenswrapper[33013]: I0313 11:17:01.650050 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 13 11:17:29.489344 master-0 kubenswrapper[33013]: I0313 11:17:29.489273 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-dp5bz"] Mar 13 11:17:29.490081 master-0 kubenswrapper[33013]: I0313 11:17:29.489531 33013 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" 
podUID="a4a079d7-e6d5-4622-87db-714e92f42458" containerName="sushy-emulator" containerID="cri-o://294b64d4c59443bfa39ef5519bc2f20e9fb2cf31cf6fdcb7c2b886cb1577a014" gracePeriod=30 Mar 13 11:17:30.013866 master-0 kubenswrapper[33013]: I0313 11:17:30.008511 33013 generic.go:334] "Generic (PLEG): container finished" podID="a4a079d7-e6d5-4622-87db-714e92f42458" containerID="294b64d4c59443bfa39ef5519bc2f20e9fb2cf31cf6fdcb7c2b886cb1577a014" exitCode=0 Mar 13 11:17:30.013866 master-0 kubenswrapper[33013]: I0313 11:17:30.008653 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" event={"ID":"a4a079d7-e6d5-4622-87db-714e92f42458","Type":"ContainerDied","Data":"294b64d4c59443bfa39ef5519bc2f20e9fb2cf31cf6fdcb7c2b886cb1577a014"} Mar 13 11:17:30.345362 master-0 kubenswrapper[33013]: I0313 11:17:30.345308 33013 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:17:30.400905 master-0 kubenswrapper[33013]: I0313 11:17:30.400700 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zqcf\" (UniqueName: \"kubernetes.io/projected/a4a079d7-e6d5-4622-87db-714e92f42458-kube-api-access-4zqcf\") pod \"a4a079d7-e6d5-4622-87db-714e92f42458\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " Mar 13 11:17:30.402124 master-0 kubenswrapper[33013]: I0313 11:17:30.402006 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/a4a079d7-e6d5-4622-87db-714e92f42458-sushy-emulator-config\") pod \"a4a079d7-e6d5-4622-87db-714e92f42458\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " Mar 13 11:17:30.402124 master-0 kubenswrapper[33013]: I0313 11:17:30.402069 33013 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: 
\"kubernetes.io/secret/a4a079d7-e6d5-4622-87db-714e92f42458-os-client-config\") pod \"a4a079d7-e6d5-4622-87db-714e92f42458\" (UID: \"a4a079d7-e6d5-4622-87db-714e92f42458\") " Mar 13 11:17:30.403768 master-0 kubenswrapper[33013]: I0313 11:17:30.403692 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a079d7-e6d5-4622-87db-714e92f42458-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "a4a079d7-e6d5-4622-87db-714e92f42458" (UID: "a4a079d7-e6d5-4622-87db-714e92f42458"). InnerVolumeSpecName "sushy-emulator-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 11:17:30.412898 master-0 kubenswrapper[33013]: I0313 11:17:30.409004 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a079d7-e6d5-4622-87db-714e92f42458-kube-api-access-4zqcf" (OuterVolumeSpecName: "kube-api-access-4zqcf") pod "a4a079d7-e6d5-4622-87db-714e92f42458" (UID: "a4a079d7-e6d5-4622-87db-714e92f42458"). InnerVolumeSpecName "kube-api-access-4zqcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 11:17:30.421390 master-0 kubenswrapper[33013]: I0313 11:17:30.421306 33013 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a079d7-e6d5-4622-87db-714e92f42458-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "a4a079d7-e6d5-4622-87db-714e92f42458" (UID: "a4a079d7-e6d5-4622-87db-714e92f42458"). InnerVolumeSpecName "os-client-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 11:17:30.439680 master-0 kubenswrapper[33013]: I0313 11:17:30.439511 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-6759f57b8c-6shx9"] Mar 13 11:17:30.440170 master-0 kubenswrapper[33013]: E0313 11:17:30.440143 33013 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a079d7-e6d5-4622-87db-714e92f42458" containerName="sushy-emulator" Mar 13 11:17:30.440170 master-0 kubenswrapper[33013]: I0313 11:17:30.440166 33013 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a079d7-e6d5-4622-87db-714e92f42458" containerName="sushy-emulator" Mar 13 11:17:30.440506 master-0 kubenswrapper[33013]: I0313 11:17:30.440486 33013 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a079d7-e6d5-4622-87db-714e92f42458" containerName="sushy-emulator" Mar 13 11:17:30.441667 master-0 kubenswrapper[33013]: I0313 11:17:30.441596 33013 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.459484 master-0 kubenswrapper[33013]: I0313 11:17:30.459418 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-6759f57b8c-6shx9"] Mar 13 11:17:30.506257 master-0 kubenswrapper[33013]: I0313 11:17:30.506065 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/12f21bdb-8dce-4600-a976-bfaf1177539f-sushy-emulator-config\") pod \"sushy-emulator-6759f57b8c-6shx9\" (UID: \"12f21bdb-8dce-4600-a976-bfaf1177539f\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.506878 master-0 kubenswrapper[33013]: I0313 11:17:30.506285 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/12f21bdb-8dce-4600-a976-bfaf1177539f-os-client-config\") pod \"sushy-emulator-6759f57b8c-6shx9\" (UID: \"12f21bdb-8dce-4600-a976-bfaf1177539f\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.506878 master-0 kubenswrapper[33013]: I0313 11:17:30.506427 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvc2j\" (UniqueName: \"kubernetes.io/projected/12f21bdb-8dce-4600-a976-bfaf1177539f-kube-api-access-dvc2j\") pod \"sushy-emulator-6759f57b8c-6shx9\" (UID: \"12f21bdb-8dce-4600-a976-bfaf1177539f\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.506878 master-0 kubenswrapper[33013]: I0313 11:17:30.506614 33013 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zqcf\" (UniqueName: \"kubernetes.io/projected/a4a079d7-e6d5-4622-87db-714e92f42458-kube-api-access-4zqcf\") on node \"master-0\" DevicePath \"\"" Mar 13 11:17:30.506878 master-0 kubenswrapper[33013]: I0313 11:17:30.506635 33013 
reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/a4a079d7-e6d5-4622-87db-714e92f42458-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:17:30.506878 master-0 kubenswrapper[33013]: I0313 11:17:30.506651 33013 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/a4a079d7-e6d5-4622-87db-714e92f42458-os-client-config\") on node \"master-0\" DevicePath \"\"" Mar 13 11:17:30.609568 master-0 kubenswrapper[33013]: I0313 11:17:30.608869 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/12f21bdb-8dce-4600-a976-bfaf1177539f-os-client-config\") pod \"sushy-emulator-6759f57b8c-6shx9\" (UID: \"12f21bdb-8dce-4600-a976-bfaf1177539f\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.610069 master-0 kubenswrapper[33013]: I0313 11:17:30.609664 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvc2j\" (UniqueName: \"kubernetes.io/projected/12f21bdb-8dce-4600-a976-bfaf1177539f-kube-api-access-dvc2j\") pod \"sushy-emulator-6759f57b8c-6shx9\" (UID: \"12f21bdb-8dce-4600-a976-bfaf1177539f\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.610069 master-0 kubenswrapper[33013]: I0313 11:17:30.609938 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/12f21bdb-8dce-4600-a976-bfaf1177539f-sushy-emulator-config\") pod \"sushy-emulator-6759f57b8c-6shx9\" (UID: \"12f21bdb-8dce-4600-a976-bfaf1177539f\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.611465 master-0 kubenswrapper[33013]: I0313 11:17:30.611182 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: 
\"kubernetes.io/configmap/12f21bdb-8dce-4600-a976-bfaf1177539f-sushy-emulator-config\") pod \"sushy-emulator-6759f57b8c-6shx9\" (UID: \"12f21bdb-8dce-4600-a976-bfaf1177539f\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.612856 master-0 kubenswrapper[33013]: I0313 11:17:30.612800 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/12f21bdb-8dce-4600-a976-bfaf1177539f-os-client-config\") pod \"sushy-emulator-6759f57b8c-6shx9\" (UID: \"12f21bdb-8dce-4600-a976-bfaf1177539f\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.624624 master-0 kubenswrapper[33013]: I0313 11:17:30.624563 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvc2j\" (UniqueName: \"kubernetes.io/projected/12f21bdb-8dce-4600-a976-bfaf1177539f-kube-api-access-dvc2j\") pod \"sushy-emulator-6759f57b8c-6shx9\" (UID: \"12f21bdb-8dce-4600-a976-bfaf1177539f\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:30.815888 master-0 kubenswrapper[33013]: I0313 11:17:30.815570 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:31.033423 master-0 kubenswrapper[33013]: I0313 11:17:31.033356 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" event={"ID":"a4a079d7-e6d5-4622-87db-714e92f42458","Type":"ContainerDied","Data":"eceef98118832f3c64205b08536b55d2c333db5459741df541ec269fa9b78489"} Mar 13 11:17:31.033423 master-0 kubenswrapper[33013]: I0313 11:17:31.033428 33013 scope.go:117] "RemoveContainer" containerID="294b64d4c59443bfa39ef5519bc2f20e9fb2cf31cf6fdcb7c2b886cb1577a014" Mar 13 11:17:31.033718 master-0 kubenswrapper[33013]: I0313 11:17:31.033505 33013 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-6dd6777c94-dp5bz" Mar 13 11:17:31.082181 master-0 kubenswrapper[33013]: I0313 11:17:31.081411 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-dp5bz"] Mar 13 11:17:31.094757 master-0 kubenswrapper[33013]: I0313 11:17:31.094589 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-dp5bz"] Mar 13 11:17:31.375045 master-0 kubenswrapper[33013]: W0313 11:17:31.374989 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12f21bdb_8dce_4600_a976_bfaf1177539f.slice/crio-9725dd4f8a36ccbef2f21247eefa64017497bb4a221415168df2c576ad4efc96 WatchSource:0}: Error finding container 9725dd4f8a36ccbef2f21247eefa64017497bb4a221415168df2c576ad4efc96: Status 404 returned error can't find the container with id 9725dd4f8a36ccbef2f21247eefa64017497bb4a221415168df2c576ad4efc96 Mar 13 11:17:31.380941 master-0 kubenswrapper[33013]: I0313 11:17:31.380882 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-6759f57b8c-6shx9"] Mar 13 11:17:32.052799 master-0 kubenswrapper[33013]: I0313 11:17:32.052706 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" event={"ID":"12f21bdb-8dce-4600-a976-bfaf1177539f","Type":"ContainerStarted","Data":"d086090ef30452ae054df9c7ae930d46630fc7a2a487872cb33f041b78544586"} Mar 13 11:17:32.052799 master-0 kubenswrapper[33013]: I0313 11:17:32.052781 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" event={"ID":"12f21bdb-8dce-4600-a976-bfaf1177539f","Type":"ContainerStarted","Data":"9725dd4f8a36ccbef2f21247eefa64017497bb4a221415168df2c576ad4efc96"} Mar 13 11:17:32.087121 master-0 kubenswrapper[33013]: I0313 11:17:32.087008 33013 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" podStartSLOduration=2.086980411 podStartE2EDuration="2.086980411s" podCreationTimestamp="2026-03-13 11:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:17:32.07438144 +0000 UTC m=+1235.550334799" watchObservedRunningTime="2026-03-13 11:17:32.086980411 +0000 UTC m=+1235.562933920" Mar 13 11:17:32.726449 master-0 kubenswrapper[33013]: I0313 11:17:32.726327 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a079d7-e6d5-4622-87db-714e92f42458" path="/var/lib/kubelet/pods/a4a079d7-e6d5-4622-87db-714e92f42458/volumes" Mar 13 11:17:40.816327 master-0 kubenswrapper[33013]: I0313 11:17:40.816261 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:40.816327 master-0 kubenswrapper[33013]: I0313 11:17:40.816323 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:40.831540 master-0 kubenswrapper[33013]: I0313 11:17:40.831471 33013 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:17:41.162569 master-0 kubenswrapper[33013]: I0313 11:17:41.162448 33013 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-6759f57b8c-6shx9" Mar 13 11:18:43.513830 master-0 kubenswrapper[33013]: I0313 11:18:43.513778 33013 trace.go:236] Trace[1492457508]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (13-Mar-2026 11:18:42.222) (total time: 1291ms): Mar 13 11:18:43.513830 master-0 kubenswrapper[33013]: Trace[1492457508]: [1.29169712s] [1.29169712s] END Mar 13 11:19:10.125731 master-0 kubenswrapper[33013]: I0313 11:19:10.125676 33013 scope.go:117] 
"RemoveContainer" containerID="57e41afa6e85fd3eb4cc687b9c4837a1560d60e8d436b4af6ed87d204392fd44" Mar 13 11:19:10.151290 master-0 kubenswrapper[33013]: I0313 11:19:10.151262 33013 scope.go:117] "RemoveContainer" containerID="8afbc27bd04f92cf0271394d94f167bd7df1504d75c1b9e9b99352e5c9f04373" Mar 13 11:19:10.171807 master-0 kubenswrapper[33013]: I0313 11:19:10.171766 33013 scope.go:117] "RemoveContainer" containerID="a8093acf32ff7e518db478a11ffab4795f90f6d19750181ab05d58a7423594e2" Mar 13 11:19:10.203973 master-0 kubenswrapper[33013]: I0313 11:19:10.203821 33013 scope.go:117] "RemoveContainer" containerID="d96cd90540cc27d403a8cef45a3bdf6266f3e5a99e7e6e0d0eef53846290d34d" Mar 13 11:20:10.286101 master-0 kubenswrapper[33013]: I0313 11:20:10.286024 33013 scope.go:117] "RemoveContainer" containerID="a5e2bfc5e6a076e2ec2afcf0d059532ea53005041d4e5c7f2d32740bd0be3c66" Mar 13 11:20:10.317560 master-0 kubenswrapper[33013]: I0313 11:20:10.317493 33013 scope.go:117] "RemoveContainer" containerID="9ec8a97cc52d7f21ac72b8dc746b2e0de311feec67514ee43f02603c62f7d9e1" Mar 13 11:20:10.339811 master-0 kubenswrapper[33013]: I0313 11:20:10.339740 33013 scope.go:117] "RemoveContainer" containerID="3692b0584989563d0422d037f87c5d4b9d67a1374feee6226d3662560cc4d392" Mar 13 11:20:10.361054 master-0 kubenswrapper[33013]: I0313 11:20:10.361014 33013 scope.go:117] "RemoveContainer" containerID="b31c021ca0ad71c5cbd5655b2a563b3647021150402cf3e523799684f7cd9c4f" Mar 13 11:20:10.387699 master-0 kubenswrapper[33013]: I0313 11:20:10.387649 33013 scope.go:117] "RemoveContainer" containerID="f8a43fe1ddc91b52f91b0ee9e6d62dbfe00a0b9a8cb023da4ae58b9e602c364a" Mar 13 11:22:10.546222 master-0 kubenswrapper[33013]: I0313 11:22:10.546165 33013 scope.go:117] "RemoveContainer" containerID="6d2380721f08b36133925a65ed5da4fa642eb82475be6a878b2a2b72a25439b9" Mar 13 11:22:10.569383 master-0 kubenswrapper[33013]: I0313 11:22:10.569333 33013 scope.go:117] "RemoveContainer" 
containerID="93f426cebcd9c8d999da94a774ef650c22797daadacaf9ad490b0e24b3f21e4b" Mar 13 11:22:30.084020 master-0 kubenswrapper[33013]: I0313 11:22:30.083928 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-b7cs5"] Mar 13 11:22:30.107385 master-0 kubenswrapper[33013]: I0313 11:22:30.107287 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2dc3-account-create-update-8vftc"] Mar 13 11:22:30.123571 master-0 kubenswrapper[33013]: I0313 11:22:30.123492 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-b7cs5"] Mar 13 11:22:30.135723 master-0 kubenswrapper[33013]: I0313 11:22:30.135629 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2dc3-account-create-update-8vftc"] Mar 13 11:22:30.752775 master-0 kubenswrapper[33013]: I0313 11:22:30.752574 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db9c8d1-1f9e-46a2-b1b6-9398919b760b" path="/var/lib/kubelet/pods/8db9c8d1-1f9e-46a2-b1b6-9398919b760b/volumes" Mar 13 11:22:30.753767 master-0 kubenswrapper[33013]: I0313 11:22:30.753735 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d2c12d-e0f4-4826-a942-52d09d6ff4ca" path="/var/lib/kubelet/pods/b1d2c12d-e0f4-4826-a942-52d09d6ff4ca/volumes" Mar 13 11:22:34.072191 master-0 kubenswrapper[33013]: I0313 11:22:34.072010 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-ll5tp"] Mar 13 11:22:34.097265 master-0 kubenswrapper[33013]: I0313 11:22:34.097203 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pd8xv"] Mar 13 11:22:34.131677 master-0 kubenswrapper[33013]: I0313 11:22:34.130430 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-ll5tp"] Mar 13 11:22:34.162627 master-0 kubenswrapper[33013]: I0313 11:22:34.161672 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-7d01-account-create-update-2wzz6"]
Mar 13 11:22:34.178628 master-0 kubenswrapper[33013]: I0313 11:22:34.177967 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pd8xv"]
Mar 13 11:22:34.197616 master-0 kubenswrapper[33013]: I0313 11:22:34.196950 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7d01-account-create-update-2wzz6"]
Mar 13 11:22:34.217303 master-0 kubenswrapper[33013]: I0313 11:22:34.215303 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8b1a-account-create-update-gbm6q"]
Mar 13 11:22:34.230119 master-0 kubenswrapper[33013]: I0313 11:22:34.230040 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8b1a-account-create-update-gbm6q"]
Mar 13 11:22:34.730206 master-0 kubenswrapper[33013]: I0313 11:22:34.730131 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae0d894-8db9-48e7-9ac5-776c822a483c" path="/var/lib/kubelet/pods/0ae0d894-8db9-48e7-9ac5-776c822a483c/volumes"
Mar 13 11:22:34.731083 master-0 kubenswrapper[33013]: I0313 11:22:34.731054 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5923a466-b63b-4110-a0d0-535eb1eb2d09" path="/var/lib/kubelet/pods/5923a466-b63b-4110-a0d0-535eb1eb2d09/volumes"
Mar 13 11:22:34.732386 master-0 kubenswrapper[33013]: I0313 11:22:34.732356 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad76c9b2-369d-443a-abe4-09a4081e67de" path="/var/lib/kubelet/pods/ad76c9b2-369d-443a-abe4-09a4081e67de/volumes"
Mar 13 11:22:34.733852 master-0 kubenswrapper[33013]: I0313 11:22:34.733818 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c16c74e7-f812-472d-9023-596975e4f499" path="/var/lib/kubelet/pods/c16c74e7-f812-472d-9023-596975e4f499/volumes"
Mar 13 11:22:51.056922 master-0 kubenswrapper[33013]: I0313 11:22:51.056542 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fsnm8"]
Mar 13 11:22:51.086309 master-0 kubenswrapper[33013]: I0313 11:22:51.086218 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fsnm8"]
Mar 13 11:22:52.735818 master-0 kubenswrapper[33013]: I0313 11:22:52.734785 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0906c20c-c44b-4754-921c-3c934a52b11d" path="/var/lib/kubelet/pods/0906c20c-c44b-4754-921c-3c934a52b11d/volumes"
Mar 13 11:22:58.044615 master-0 kubenswrapper[33013]: I0313 11:22:58.042446 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8a81-account-create-update-mtcg5"]
Mar 13 11:22:58.058628 master-0 kubenswrapper[33013]: I0313 11:22:58.057648 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8a81-account-create-update-mtcg5"]
Mar 13 11:22:58.750863 master-0 kubenswrapper[33013]: I0313 11:22:58.750291 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7529222b-1d6b-439e-8e73-023ecc18255a" path="/var/lib/kubelet/pods/7529222b-1d6b-439e-8e73-023ecc18255a/volumes"
Mar 13 11:22:59.038009 master-0 kubenswrapper[33013]: I0313 11:22:59.037764 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ggtxn"]
Mar 13 11:22:59.055257 master-0 kubenswrapper[33013]: I0313 11:22:59.055129 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ggtxn"]
Mar 13 11:23:00.054556 master-0 kubenswrapper[33013]: I0313 11:23:00.054468 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c988-account-create-update-tbhzn"]
Mar 13 11:23:00.067487 master-0 kubenswrapper[33013]: I0313 11:23:00.067417 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-t62cj"]
Mar 13 11:23:00.080278 master-0 kubenswrapper[33013]: I0313 11:23:00.080206 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-t62cj"]
Mar 13 11:23:00.093211 master-0 kubenswrapper[33013]: I0313 11:23:00.093133 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c988-account-create-update-tbhzn"]
Mar 13 11:23:00.728514 master-0 kubenswrapper[33013]: I0313 11:23:00.728457 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22471c80-7d02-4478-a2d4-4ae9e68cb328" path="/var/lib/kubelet/pods/22471c80-7d02-4478-a2d4-4ae9e68cb328/volumes"
Mar 13 11:23:00.729118 master-0 kubenswrapper[33013]: I0313 11:23:00.729093 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ceb46b-d857-4ddb-82c4-dbbb416ad706" path="/var/lib/kubelet/pods/25ceb46b-d857-4ddb-82c4-dbbb416ad706/volumes"
Mar 13 11:23:00.729678 master-0 kubenswrapper[33013]: I0313 11:23:00.729655 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b449da8c-7bed-422e-bbf5-843c97f4b73b" path="/var/lib/kubelet/pods/b449da8c-7bed-422e-bbf5-843c97f4b73b/volumes"
Mar 13 11:23:06.042000 master-0 kubenswrapper[33013]: I0313 11:23:06.041924 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-69lvv"]
Mar 13 11:23:06.056598 master-0 kubenswrapper[33013]: I0313 11:23:06.056475 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-69lvv"]
Mar 13 11:23:06.727579 master-0 kubenswrapper[33013]: I0313 11:23:06.727502 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec43ecb9-e354-475a-aa0e-4dbe06716927" path="/var/lib/kubelet/pods/ec43ecb9-e354-475a-aa0e-4dbe06716927/volumes"
Mar 13 11:23:10.064180 master-0 kubenswrapper[33013]: I0313 11:23:10.064082 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6zpjq"]
Mar 13 11:23:10.080044 master-0 kubenswrapper[33013]: I0313 11:23:10.079958 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6zpjq"]
Mar 13 11:23:10.642220 master-0 kubenswrapper[33013]: I0313 11:23:10.642151 33013 scope.go:117] "RemoveContainer" containerID="bf3f7f58e1286b34b0194ced784fa72b49a7d468fcf51265b018f83d3711cdd8"
Mar 13 11:23:10.669975 master-0 kubenswrapper[33013]: I0313 11:23:10.669915 33013 scope.go:117] "RemoveContainer" containerID="fecfeba4163b0c48e9b72f3cf6d67e455a7a60301fb55e404b66dc2579c87209"
Mar 13 11:23:10.707452 master-0 kubenswrapper[33013]: I0313 11:23:10.707162 33013 scope.go:117] "RemoveContainer" containerID="03bd0c5745b80a7a19932a5392eb736e806d45816319e270036c62b8bfb2634a"
Mar 13 11:23:10.735933 master-0 kubenswrapper[33013]: I0313 11:23:10.735850 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7be7f77-638d-446e-b9a4-13195f124ca0" path="/var/lib/kubelet/pods/d7be7f77-638d-446e-b9a4-13195f124ca0/volumes"
Mar 13 11:23:10.741388 master-0 kubenswrapper[33013]: I0313 11:23:10.741324 33013 scope.go:117] "RemoveContainer" containerID="14db1c6a5dc645fba7b9f6fec826d2c7b5cd75b9acd71c3e98e311b0284c699e"
Mar 13 11:23:10.773721 master-0 kubenswrapper[33013]: I0313 11:23:10.773640 33013 scope.go:117] "RemoveContainer" containerID="202ff16ce6cc503330a4aa39c9d938d3c4e72a43d474a9fa2922a928c2fc455e"
Mar 13 11:23:10.799102 master-0 kubenswrapper[33013]: I0313 11:23:10.799055 33013 scope.go:117] "RemoveContainer" containerID="5d4551c61a673e60df50d2ec926f46c649584bc356644ef6f4ae35c9e93d839f"
Mar 13 11:23:10.826769 master-0 kubenswrapper[33013]: I0313 11:23:10.826706 33013 scope.go:117] "RemoveContainer" containerID="3738685ff15ea7554790af06095301059472524beb5eb802a45832df41cbbb42"
Mar 13 11:23:10.851605 master-0 kubenswrapper[33013]: I0313 11:23:10.851517 33013 scope.go:117] "RemoveContainer" containerID="4caaf804b22452dab97d8abd6a9c76e0e654fe32002f200f9a518dd52b2b3454"
Mar 13 11:23:10.880869 master-0 kubenswrapper[33013]: I0313 11:23:10.880802 33013 scope.go:117] "RemoveContainer" containerID="6c9a8a313191498fcd9b8150c1c9682d31c2866a831970c79db85060f9ff1a8e"
Mar 13 11:23:10.908254 master-0 kubenswrapper[33013]: I0313 11:23:10.908181 33013 scope.go:117] "RemoveContainer" containerID="eae752490a2297ebc0b179f885f7a0ffca02cda2ce9ec68d9d7c128df5a9fe8e"
Mar 13 11:23:10.938236 master-0 kubenswrapper[33013]: I0313 11:23:10.938144 33013 scope.go:117] "RemoveContainer" containerID="da6c0f955c283dc17f7d75234bdde669ae29d9f4e9cced3bbc5a6b9f5e133f87"
Mar 13 11:23:10.973299 master-0 kubenswrapper[33013]: I0313 11:23:10.973244 33013 scope.go:117] "RemoveContainer" containerID="5540843b36d3356e6ca9fea2549eef0ae2ef07e4a4e43269a3bb15ce9e503819"
Mar 13 11:23:11.003378 master-0 kubenswrapper[33013]: I0313 11:23:11.003327 33013 scope.go:117] "RemoveContainer" containerID="ae8dce7e3bc7efb355f3ec109360ce5165b206bdded212cfd693a71917ad2baa"
Mar 13 11:23:16.056226 master-0 kubenswrapper[33013]: I0313 11:23:16.055393 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-w9jfw"]
Mar 13 11:23:16.075549 master-0 kubenswrapper[33013]: I0313 11:23:16.075086 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-w9jfw"]
Mar 13 11:23:16.095765 master-0 kubenswrapper[33013]: I0313 11:23:16.095670 33013 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-17f4-account-create-update-pgqvs"]
Mar 13 11:23:16.106220 master-0 kubenswrapper[33013]: I0313 11:23:16.106130 33013 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-17f4-account-create-update-pgqvs"]
Mar 13 11:23:16.735327 master-0 kubenswrapper[33013]: I0313 11:23:16.734500 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63be249f-23c2-4c9a-a6f3-3f9355da4f66" path="/var/lib/kubelet/pods/63be249f-23c2-4c9a-a6f3-3f9355da4f66/volumes"
Mar 13 11:23:16.735716 master-0 kubenswrapper[33013]: I0313 11:23:16.735655 33013 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b901e4-e1c4-41bf-8083-31d19c301c44" path="/var/lib/kubelet/pods/65b901e4-e1c4-41bf-8083-31d19c301c44/volumes"
Mar 13 11:23:25.461981 master-0 kubenswrapper[33013]: I0313 11:23:25.461898 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-667w8/must-gather-98kzh"]
Mar 13 11:23:25.466899 master-0 kubenswrapper[33013]: I0313 11:23:25.466850 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-667w8/must-gather-98kzh"
Mar 13 11:23:25.497636 master-0 kubenswrapper[33013]: I0313 11:23:25.497511 33013 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-667w8/must-gather-mh4hx"]
Mar 13 11:23:25.500491 master-0 kubenswrapper[33013]: I0313 11:23:25.500446 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-667w8/must-gather-mh4hx"
Mar 13 11:23:25.500707 master-0 kubenswrapper[33013]: I0313 11:23:25.500685 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-667w8/must-gather-mh4hx"]
Mar 13 11:23:25.507623 master-0 kubenswrapper[33013]: I0313 11:23:25.507231 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-667w8"/"kube-root-ca.crt"
Mar 13 11:23:25.507623 master-0 kubenswrapper[33013]: I0313 11:23:25.507290 33013 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-667w8"/"openshift-service-ca.crt"
Mar 13 11:23:25.516623 master-0 kubenswrapper[33013]: I0313 11:23:25.516059 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-667w8/must-gather-98kzh"]
Mar 13 11:23:25.541290 master-0 kubenswrapper[33013]: I0313 11:23:25.541081 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/40cad107-7ba2-4254-a529-6a40d4c4086f-must-gather-output\") pod \"must-gather-98kzh\" (UID: \"40cad107-7ba2-4254-a529-6a40d4c4086f\") " pod="openshift-must-gather-667w8/must-gather-98kzh"
Mar 13 11:23:25.541482 master-0 kubenswrapper[33013]: I0313 11:23:25.541384 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvk6l\" (UniqueName: \"kubernetes.io/projected/40cad107-7ba2-4254-a529-6a40d4c4086f-kube-api-access-lvk6l\") pod \"must-gather-98kzh\" (UID: \"40cad107-7ba2-4254-a529-6a40d4c4086f\") " pod="openshift-must-gather-667w8/must-gather-98kzh"
Mar 13 11:23:25.644087 master-0 kubenswrapper[33013]: I0313 11:23:25.644021 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/40cad107-7ba2-4254-a529-6a40d4c4086f-must-gather-output\") pod \"must-gather-98kzh\" (UID: \"40cad107-7ba2-4254-a529-6a40d4c4086f\") " pod="openshift-must-gather-667w8/must-gather-98kzh"
Mar 13 11:23:25.644087 master-0 kubenswrapper[33013]: I0313 11:23:25.644088 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7s9v\" (UniqueName: \"kubernetes.io/projected/959e6caf-9bec-4e33-a31f-0c4b5b182192-kube-api-access-s7s9v\") pod \"must-gather-mh4hx\" (UID: \"959e6caf-9bec-4e33-a31f-0c4b5b182192\") " pod="openshift-must-gather-667w8/must-gather-mh4hx"
Mar 13 11:23:25.644468 master-0 kubenswrapper[33013]: I0313 11:23:25.644263 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvk6l\" (UniqueName: \"kubernetes.io/projected/40cad107-7ba2-4254-a529-6a40d4c4086f-kube-api-access-lvk6l\") pod \"must-gather-98kzh\" (UID: \"40cad107-7ba2-4254-a529-6a40d4c4086f\") " pod="openshift-must-gather-667w8/must-gather-98kzh"
Mar 13 11:23:25.644468 master-0 kubenswrapper[33013]: I0313 11:23:25.644311 33013 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/959e6caf-9bec-4e33-a31f-0c4b5b182192-must-gather-output\") pod \"must-gather-mh4hx\" (UID: \"959e6caf-9bec-4e33-a31f-0c4b5b182192\") " pod="openshift-must-gather-667w8/must-gather-mh4hx"
Mar 13 11:23:25.648847 master-0 kubenswrapper[33013]: I0313 11:23:25.644871 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/40cad107-7ba2-4254-a529-6a40d4c4086f-must-gather-output\") pod \"must-gather-98kzh\" (UID: \"40cad107-7ba2-4254-a529-6a40d4c4086f\") " pod="openshift-must-gather-667w8/must-gather-98kzh"
Mar 13 11:23:25.678972 master-0 kubenswrapper[33013]: I0313 11:23:25.678857 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvk6l\" (UniqueName: \"kubernetes.io/projected/40cad107-7ba2-4254-a529-6a40d4c4086f-kube-api-access-lvk6l\") pod \"must-gather-98kzh\" (UID: \"40cad107-7ba2-4254-a529-6a40d4c4086f\") " pod="openshift-must-gather-667w8/must-gather-98kzh"
Mar 13 11:23:25.747505 master-0 kubenswrapper[33013]: I0313 11:23:25.747338 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/959e6caf-9bec-4e33-a31f-0c4b5b182192-must-gather-output\") pod \"must-gather-mh4hx\" (UID: \"959e6caf-9bec-4e33-a31f-0c4b5b182192\") " pod="openshift-must-gather-667w8/must-gather-mh4hx"
Mar 13 11:23:25.747778 master-0 kubenswrapper[33013]: I0313 11:23:25.747518 33013 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7s9v\" (UniqueName: \"kubernetes.io/projected/959e6caf-9bec-4e33-a31f-0c4b5b182192-kube-api-access-s7s9v\") pod \"must-gather-mh4hx\" (UID: \"959e6caf-9bec-4e33-a31f-0c4b5b182192\") " pod="openshift-must-gather-667w8/must-gather-mh4hx"
Mar 13 11:23:25.748205 master-0 kubenswrapper[33013]: I0313 11:23:25.748152 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/959e6caf-9bec-4e33-a31f-0c4b5b182192-must-gather-output\") pod \"must-gather-mh4hx\" (UID: \"959e6caf-9bec-4e33-a31f-0c4b5b182192\") " pod="openshift-must-gather-667w8/must-gather-mh4hx"
Mar 13 11:23:25.770336 master-0 kubenswrapper[33013]: I0313 11:23:25.770228 33013 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7s9v\" (UniqueName: \"kubernetes.io/projected/959e6caf-9bec-4e33-a31f-0c4b5b182192-kube-api-access-s7s9v\") pod \"must-gather-mh4hx\" (UID: \"959e6caf-9bec-4e33-a31f-0c4b5b182192\") " pod="openshift-must-gather-667w8/must-gather-mh4hx"
Mar 13 11:23:25.829791 master-0 kubenswrapper[33013]: I0313 11:23:25.829695 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-667w8/must-gather-98kzh"
Mar 13 11:23:25.841835 master-0 kubenswrapper[33013]: I0313 11:23:25.841762 33013 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-667w8/must-gather-mh4hx"
Mar 13 11:23:26.528748 master-0 kubenswrapper[33013]: I0313 11:23:26.528580 33013 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 11:23:26.530873 master-0 kubenswrapper[33013]: I0313 11:23:26.530791 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-667w8/must-gather-98kzh"]
Mar 13 11:23:26.682675 master-0 kubenswrapper[33013]: W0313 11:23:26.682108 33013 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod959e6caf_9bec_4e33_a31f_0c4b5b182192.slice/crio-f5aee214fd2e51756eb79be6d223d2af3664724608296e8691bdd08dde175e03 WatchSource:0}: Error finding container f5aee214fd2e51756eb79be6d223d2af3664724608296e8691bdd08dde175e03: Status 404 returned error can't find the container with id f5aee214fd2e51756eb79be6d223d2af3664724608296e8691bdd08dde175e03
Mar 13 11:23:26.683129 master-0 kubenswrapper[33013]: I0313 11:23:26.683027 33013 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-667w8/must-gather-mh4hx"]
Mar 13 11:23:26.830644 master-0 kubenswrapper[33013]: I0313 11:23:26.830503 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-667w8/must-gather-98kzh" event={"ID":"40cad107-7ba2-4254-a529-6a40d4c4086f","Type":"ContainerStarted","Data":"3d94f3031aeed9bf3f8c84b541fc617b20b31c0412cf573d1ec62725355853d8"}
Mar 13 11:23:26.832304 master-0 kubenswrapper[33013]: I0313 11:23:26.832250 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-667w8/must-gather-mh4hx" event={"ID":"959e6caf-9bec-4e33-a31f-0c4b5b182192","Type":"ContainerStarted","Data":"f5aee214fd2e51756eb79be6d223d2af3664724608296e8691bdd08dde175e03"}
Mar 13 11:23:28.870412 master-0 kubenswrapper[33013]: I0313 11:23:28.870307 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-667w8/must-gather-mh4hx" event={"ID":"959e6caf-9bec-4e33-a31f-0c4b5b182192","Type":"ContainerStarted","Data":"64d9e8b0d2c2b3588fcaf914a57c3f435984b1f63df559915ec328af3ada7445"}
Mar 13 11:23:28.870412 master-0 kubenswrapper[33013]: I0313 11:23:28.870383 33013 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-667w8/must-gather-mh4hx" event={"ID":"959e6caf-9bec-4e33-a31f-0c4b5b182192","Type":"ContainerStarted","Data":"b178cad7d9ad2e3f8de34b69620bf1f1f219c7e5fd00a5f5951e871ff3cbf139"}
Mar 13 11:23:28.905617 master-0 kubenswrapper[33013]: I0313 11:23:28.904182 33013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-667w8/must-gather-mh4hx" podStartSLOduration=2.7660311269999998 podStartE2EDuration="3.904152098s" podCreationTimestamp="2026-03-13 11:23:25 +0000 UTC" firstStartedPulling="2026-03-13 11:23:26.684672476 +0000 UTC m=+1590.160625825" lastFinishedPulling="2026-03-13 11:23:27.822793447 +0000 UTC m=+1591.298746796" observedRunningTime="2026-03-13 11:23:28.888185014 +0000 UTC m=+1592.364138363" watchObservedRunningTime="2026-03-13 11:23:28.904152098 +0000 UTC m=+1592.380105447"
Mar 13 11:23:31.205364 master-0 kubenswrapper[33013]: I0313 11:23:31.203939 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-xmpst_0ac1a605-d2d5-4004-96f5-121c20555bde/cluster-version-operator/0.log"
Mar 13 11:23:31.408274 master-0 kubenswrapper[33013]: I0313 11:23:31.408213 33013 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-xmpst_0ac1a605-d2d5-4004-96f5-121c20555bde/cluster-version-operator/1.log"